# **The Everyday Life of an Algorithm**

# **Daniel Neyland**


Department of Sociology, Goldsmiths, University of London, London, UK

#### ISBN 978-3-030-00577-1 ISBN 978-3-030-00578-8 (eBook) https://doi.org/10.1007/978-3-030-00578-8

Library of Congress Control Number: 2018959729

© The Editor(s) (if applicable) and The Author(s) 2019. This book is an open access publication. **Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: © Harvey Loake

This Palgrave Pivot imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

# Acknowledgements

Thanks to the algorithms who took part in this book. You know who you are. And you know who I am too. I am the human-shaped object. Thanks to the audiences who have listened, watched and become enwrapped by the algorithms. Your comments have been noted. Thanks to Inga Kroener and Patrick Murphy for their work. Thanks to Sarah, and to Thomas and George who have been learning about algorithms at school. And thanks to Goldsmiths for being the least algorithmic institution left in Britain. The research that led to this book was funded by European Research funding, with an FP7 grant (no. 261653) and under the ERC project MISTS (no. 313173).

# Introduction: Everyday Life and the Algorithm

**Abstract** This chapter introduces the recent academic literature on algorithms and some of the popular concerns that have been expressed about algorithms in mainstream media, including the power and opacity of algorithms. The chapter suggests that, in place of opening algorithms to greater scrutiny, the academic literature tends to play on this algorithmic drama. As a counter move, this chapter suggests taking seriously what we might mean by the everyday life of the algorithm. Several approaches to everyday life are considered and a set of three analytic sensibilities developed for interrogating the everyday life of the algorithm in subsequent chapters. These sensibilities comprise: how do algorithms participate in the everyday? How do algorithms compose the everyday? And how (to what extent, through what means) does the algorithmic become the everyday? The chapter ends by setting out the structure of the rest of the book.

**Keywords** Science and Technology Studies · Accountability · Opacity · Transparency · Power · The Everyday

# Opening

An algorithm is conventionally defined as 'a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer'.1 In this sense, an algorithm strictly speaking is nothing more than the ordering of steps that a combination of software and hardware might subsequently put into operation. It might seem odd, then, to write a book about the everyday life of a set of instructions. What life might the instructions have led, into what romance or crime might the instructions have become entangled, what disappointments might they have had? These seem unlikely questions to pose. For an ethnographer, they also seem like questions that would be difficult to pursue. Even if the instructions were engaged in a variety of different social interactions, where do these take place and how could I ever get to know them?

A quick perusal of the instructions hanging around in my house reveals a slightly crumpled paper booklet on my home heating system, two sets of colourful Lego manuals setting out how to build a vehicle, and a form with notes on how to apply for a new passport. I have no idea how the latter arrived in my house or for whom it is necessary. But it is clear in its formality and precision. I also know my sons will shortly be home from school and determined in their efforts to build their new Lego. And I am aware of, but slightly annoyed by, the demands set by the heating instructions that suggest my boiler pressure is too high (above 1.5 bars; after a quick Google, it turns out that a bar is roughly the pressure exerted by a 10-metre column of water). The pressure needs to be reduced, and I have known this all week and not acted on it. The instructions have annoyed me by instilling a familiar sense of inadequacy in my own (in)ability to manage my domestic affairs—of course, the instructions provide numbers, a written diagram, even some words, but their meanings and my required response remain out of reach.

In a sense, then, we are already witnessing the social life in which these instructions participate. The passport form has arrived from somewhere, for someone, and is clear in its formal status. The Lego was a gift and will no doubt become the centre of my children's attention. And the heating system might break down if I don't do something reasonably soon. These point to some of the cornerstones for contemporary living. Travel and transport, government and formal bureaucracy, gift giving and learning, domestic arrangements and shelter are all witness-able through the instructions and the life in which they participate. As with other participants in social life, the instructions are demanding, occasionally quite austere and/or useless. Making sense of these everyday entanglements might be quite important if we were interested in the everyday life of instructions, but is this the kind of everyday life in which algorithms participate?

Reading through the ever-expanding recent academic literature on algorithms, the answer would be a qualified no. The everyday, humdrum banalities of life are somewhat sidelined by an algorithmic drama.2 Here, the focus is on algorithmic power, the agency held by algorithms in making decisions over our futures, decisions over which we have no control. The algorithms are said to be opaque, their content unreadable. A closely guarded and commodified secret, whose very value depends upon retaining their opacity. All we get to see are their results: the continuing production of a stream of digital associations that form consequential relations between data sets. We are now data subjects or, worse, data derivatives (Amoore 2011). We are rendered powerless. We cannot know the algorithm or limit the algorithm or challenge its outputs.

A quick read through the news (via a search for 'algorithm') reveals numerous further stories of the capacity of algorithms to dramatically transform our lives. Once again, the humdrum banalities of the everyday activities that the instructions participated in are pushed aside in favour of a global narrative of unfolding, large-scale change. In the UK, the Guardian newspaper tells us that large firms are increasingly turning to algorithms to sift through job applications,3 using personality tests at the point of application as a way to pick out patterns of answers and steer applicants towards rejection or the next phase of the application process. What is at stake is not the effectiveness of the algorithms, as little data is collected on whether or not the algorithms are making the right decisions. Instead, the strength of the algorithms is their efficiency, with employment decisions made on a scale, at a speed and at a low cost that no conventional human resources department could match.

In the USA, we are told of algorithmic policing that sets demands for police officers to continually pursue the same neighbourhoods for potential crime.4 Predictive policing does not actively anticipate specific crimes, but uses patterns of previous arrests to map out where future arrests should be made. The algorithms create their own effects as police officers are held accountable by the algorithm for the responses they make to the system's predictions. Once a neighbourhood has acquired a statistical pattern denoting high crime, its inhabitants will be zealously policed and frequently arrested, ensuring it maintains its high crime status.

Meanwhile in Italy, Nutella launches a new marketing campaign in which an algorithm continually produces new labels for its food jars.5 Seven million distinct labels are produced, each feeding off an algorithmically derived set of colours and patterns that, the algorithm believes, consumers will find attractive. The chance to own a limited edition Nutella design, combined with these newspaper stories and an advertising campaign, drives an algorithmically derived consumer demand. But the story is clear: it is not the labels that are unique in any important way. It is the algorithm that is unique.

And in India, a robot that uses algorithms to detect patterns of activity in order to offer appropriate responses struggles with normativity.6 The robot finds it hard to discern when it should be quiet or indeed noisier, what counts as a reasonable expectation of politeness, which subtle behavioural cues it should pick up on or to which it should respond. This is one small part of the unfolding development of algorithmic artificial intelligence and the emergence of various kinds of robots that will apparently replace us humans.

These stories are doubtless part of a global algorithmic drama. But in important ways, these stories promote drama at the expense of understanding. As Ziewitz (2016) asks: just what is an algorithm? In these stories, the algorithm seems to be a central character, but of what the algorithm consists, and why and how it participates in producing effects, is all left to one side. There are aspects of everyday life that are emphasised within these stories: employment, policing, consumer demand and robotics are each positioned in relation to an aspect of ordinary activity from job interviews, to arrests and court trials, from markets and investments to the future role of robots in shaping conversations. But—and this seems to be the important part—we are not provided with any great insight into the everyday life *of the algorithm*. Through what means are these algorithms produced in the first place, how are they imagined, brought into being and put to work? Of what do the algorithms consist and to what extent do they change? What role can be accorded to the algorithm rather than the computational infrastructure within which it operates? And how can we capture the varied ways in which algorithms and everyday life participate in the composition of effects?

These are the questions that this book seeks to engage. As I noted in the opening example of the instructions in various locations around my house, ordered sets of step-by-step routines can establish their own specifc demands and become entangled in some of the key social relations in which we participate. As I have further suggested in the preceding media stories, such ordered routines in the form of algorithms portray a kind of drama, but one that we need to cut through in order to investigate how the everyday and algorithms intersect. In the next section, I will begin this task by working through some of the recent academic literature on algorithms. I will then pursue the everyday as an important foreground for the subsequent story of algorithms. Finally, I will set out the structure of the rest of this book.

### Algorithmic Discontent

One obvious starting point for an enquiry into algorithms is to look at an algorithm. And here, despite the apparent drama of algorithmic opacity, is an algorithm (Fig. 1.1):

This is taken from a project that sought to develop an algorithmic surveillance system for airport and train station security (and is introduced in more detail along with the airport and train station and their peculiar characteristics in Chapter 2). The algorithm is designed as a set of ordered step-by-step instructions for the detection of abandoned luggage. It is similar in some respects to the instructions for my

**Fig. 1.1** Abandoned luggage algorithm

home heating system or my children's Lego. It is designed as a way to order the steps necessary for an effect to be brought about by others. However, while my heating system instructions are (nominally and slightly uselessly) oriented towards me as a human actor, the instructions here are for the surveillance system, its software and hardware which must bring about these effects (identifying abandoned luggage) for the system's human operatives. In this sense, the algorithm is oriented towards human and non-human others. Making sense of the algorithm is not too difficult (although bringing about its effects turned out to be more challenging as we shall see in subsequent chapters). It is structured through four initial conditions (IF questions) that should lead to four subsequent consequences (THEN rules). The conditions required are: IF an object is identified within a set area that is classified as luggage, is separate from a human object, is above a certain distance from a human and for a certain time (with a threshold for distance and time set as required), THEN an 'abandoned luggage' alert will be issued. What can the recent academic literature on algorithms tell us about this kind of ordered set of instructions, conditions and consequences?
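
The IF/THEN structure described above can be sketched in code. This is a minimal illustrative sketch only: the function name, parameter names and threshold values are hypothetical choices made here for clarity, not taken from the surveillance system itself, which operated on streams of classified video data rather than tidy function arguments.

```python
# Hypothetical sketch of the abandoned luggage IF/THEN rule.
# All names and threshold values are illustrative assumptions.

def abandoned_luggage_alert(object_class, in_monitored_area,
                            distance_to_nearest_human, seconds_separated,
                            distance_threshold=3.0, time_threshold=60.0):
    """Return True when all four IF conditions hold, i.e. when the
    THEN rule fires and an 'abandoned luggage' alert is issued."""
    return (object_class == "luggage"                            # IF classified as luggage
            and in_monitored_area                                # IF within the set area
            and distance_to_nearest_human >= distance_threshold  # IF beyond the distance threshold
            and seconds_separated >= time_threshold)             # IF separated for long enough
```

Even this toy version makes visible what the chapter goes on to argue: the rule itself is trivial, while everything consequential (how an object comes to be classified as 'luggage', how distance and separation time are measured, who sets the thresholds) happens elsewhere, in the system and practices around the rule.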

Recent years have seen an upsurge in writing on algorithms. This literature points to a number of notable themes that have helped establish the algorithm as a focal point for contemporary concern. Key has been the apparent power of algorithms (Beer 2009; Lash 2007; Slavin 2011; Spring 2011; Stalder and Mayer 2009; Pasquale 2015) that is given effect in various ways. Algorithms are said to provide a truth for modern living, a means to shape our lives, play a central role in financial growth and forms of exchange and participate in forms of governmentality through which we become algorithmic selves. In line with the latter point, it is said we have no option but to make sense of our own lives on the terms of algorithms as we are increasingly made aware of the role, status and influence of algorithms in shaping data held about us, our employment prospects or our intimate relations. At least two notions of power can be discerned, then, in these accounts. There is a traditional sense of power in which algorithms act to influence and shape particular effects. In this sense, algorithms might be said to hold power. A second notion of power is more Foucauldian in its inclination, suggesting that algorithms are caught up within a set of relations through which power is exercised. Becoming an algorithmic self is thus an expression of the exercise of this power, but it is not a power held by any particular party. Instead, it is a power achieved through the plaiting of multiple relations. In either case, the algorithm is presented as a new actor in these forms and relations of power.

What can this tell us about our abandoned luggage algorithm? Written on the page, it does not seem very powerful. I do not anticipate that it is about to jump off the page (or screen) and act. It is not mute, but it also does not appear to be the bearer of any great agency. The notion that this algorithm in itself wields power seems unlikely. Yet its ordered set of instructions does seem to set demands. We might then ask for whom or what are these demands set? In making sense of the everyday life of this algorithm, we would want to pursue these demands. If the academic writing on power is understood as having a concern for effect, then we might also want to make sense of the grounds on which these demands lead to any subsequent action. We would have to follow the everyday life of the algorithm from its demands through to accomplishing a sense of how (to any extent) these demands have been met. This sets a cautionary tone for the traditional notion of power. To argue that the demands lead to effects (and hence support a traditional notion of power, one that is held by the algorithm) would require a short-cutting of all the steps. It would need to ignore the importance of the methods through which the algorithm was designed in the first place, the software and hardware and human operatives that are each required to play various roles, institute further demands and that take action off in a different direction (see Chapters 3 and 5 in particular), before an effect is produced. We would need to ignore all these other actors and actions to maintain the argument that it is the algorithm that holds power. Nonetheless, the Foucauldian sense of power, dispersed through the ongoing plaiting of relations, might still hold some analytic utility here: pursuing the everyday life of the algorithm might provide a means to pursue these relations and the effects in which they participate.

At the same time as algorithms are noted as powerful (in the sense of holding power) or part of complex webs of relations through which power is exercised, an algorithmic drama (see Ziewitz 2016; Neyland 2016) plays out through their apparent inscrutability. To be powerful and inscrutable seems to sit centrally within a narrative of algorithmic mystery (just how do they work, what do algorithms do and how do they accomplish effect) that is frequently combined with calls for algorithmic accountability (Diakopolous 2013). Accountability is presented as distinct from transparency. While the latter might have utility for presenting the content or logic of an algorithm, accountability is said to be necessary for interrogating its outcomes (Felten 2012). Only knowing the content of an algorithm might be insufficient for understanding and deciding upon the relative justice of its effects. But here is where the drama is ratcheted up: the value of many commercial algorithms depends upon guarding their contents (Gillespie 2013). No transparency is a condition for the accumulation of algorithmically derived wealth. No transparency also makes accountability more challenging in judging the justice of an algorithm's effects: not knowing the content of an algorithm makes pinning down responsibility for its consequences more difficult.

Our abandoned luggage algorithm presents its own contents. In this sense, we have achieved at least a limited sense of transparency. In forthcoming chapters, we will start to gain insights into other algorithms to which the abandoned luggage example is tied. But having the rules on the page does not provide a strong sense of accountability. In the preceding paragraphs, I suggested that insights into the everyday life of the algorithm are crucial to making sense of how it participates in bringing about effects. It is these effects and the complex sets of relations that in various ways underpin their emergence that need to be studied for the ordered steps of the abandoned luggage algorithm to be rendered accountable.

A further theme in recent writing has been to argue that algorithms should not be understood in isolation. Mythologizing the status or power of an algorithm, the capability of algorithms to act on their own terms or to straightforwardly produce effects (Ziewitz 2016) have each been questioned. Here, software studies scholars have suggested we need to both take algorithms and their associated software/code seriously and situate these studies within a broader set of associations through which algorithms might be said to act (Neyland and Mollers 2016). Up-close, ethnographic engagement with algorithms is presented as one means to achieve this kind of analysis (although as Kitchin [2014] points out, there are various other routes of enquiry also available). Getting close to the algorithm might help address the preceding concerns highlighted in algorithmic writing; opening up the inscrutable algorithm to a kind of academic accountability and deepening our understanding of the power of algorithms to participate in the production of effects. This further emphasises the importance of grasping the everyday life of the algorithm. How do the ordered steps of the abandoned luggage algorithm combine with various humans (security operatives, airport passengers, terminal managers and their equivalents in train stations) and non-humans (luggage, airports, software, trains, tracks, hardware) on a moment to moment basis?

Yet algorithmic writing also produces its own warnings. Taken together, writing on algorithms suggests that there is not one single matter of concern to take on and address. Alongside power, inscrutability and accountability, numerous questions are raised regarding the role of algorithms in making choices, political preferences, dating, employment, financial crises, death, war and terrorism (Crawford 2016; Karppi and Crawford 2016; Pasquale 2015; Schuppli 2014) among many other concerns. The suggestion is that algorithms do not operate in a single field or produce effects in a single manner or raise a single question or even a neatly bounded set of questions. Instead, what is required is a means to make sense of algorithms as participants in an array of activities that are all bound up with the production of effects, some of which are unanticipated, some of which seem messy and some of which require careful analysis in order to be made to make sense. It is not the case that making sense of the life of our abandoned luggage algorithm will directly shed light on all these other activities. However, it will provide a basis for algorithmically focused research to move forward. This, I suggest, can take place through a turn to the everyday.

### Everyday

Some existing academic work on algorithms engages with 'algorithmic life' (Amoore and Piotukh 2015). But this tends to mean the life of humans as seen (or governed) through algorithms. If we want to make sense of algorithms, we need to engage with their everyday life. However, rather than continually repeat the importance of 'the everyday' as if it is a concept that can somehow address all concerns with algorithms or is in itself available as a neat context within which things will make sense, instead I suggest we need to take seriously what we might mean by the 'everyday life' of an algorithm. If we want to grasp a means to engage with the entanglements of a set of ordered instructions like our abandoned luggage algorithm, then we need to do some work to set out our terms of engagement.

The everyday has been a focal point for sociological analysis for several decades. Goffman's (1959) pioneering work on the dramaturgical staging of everyday life provides serious consideration of the behaviour, sanctions, decorum, controls and failures that characterise an array of situations. De Certeau (1984) by switching focus to the practices of everyday life brings rules, bricolage, tactics and strategies to the centre of his analysis of the everyday. And Lefebvre (2014) suggests across three volumes that the everyday is both a site of containment and potential change. The everyday of the algorithm will be given more consideration in subsequent chapters, but what seems apparent in these works is that for our purposes, the technologies or material forms that take part in everyday life are somewhat marginalised. Technologies are props in dramaturgical performances (in Goffman's analysis of the life of crofters in the Shetland Islands) or a kind of background presence to practices of seeing (in de Certeau's analysis of a train journey). Lefebvre enters into a slightly more detailed analysis of technology, suggesting for example that 'computer scientists proclaim the generalization of their theoretical and practical knowledge to society as a whole' (2014: 808). But Lefebvre's account is also dismissive of the analytic purpose of focusing on technologies as such, suggesting 'it is pointless to dwell on equipment and techniques' (2014: 812). Taken together, as far as that is possible, these authors' work suggests few grounds for opening up the everyday life of technology. Perhaps the most that could be said is that, based on these works, an analysis of the everyday life of an algorithm would need to attend to the human practices that then shape the algorithm. 
Even everyday analyses that devote lengthy excursions to technology, such as Braudel's (1979) work on everyday capitalism, tend to treat technologies as something to be catalogued as part of a historical inventory. To provide analytical purchase on the algorithm as a participant in everyday life requires a distinct approach.

One starting point for taking the everyday life of objects, materials and technologies seriously can be found in Latour's search for the missing masses. According to Latour, sociologists:

are constantly looking, somewhat desperately, for social links sturdy enough to tie all of us together… The society they try to recompose with bodies and norms constantly crumbles. Something is missing, something that should be strongly social and highly moral. Where can they find it? … To balance our accounts of society, we simply have to turn our exclusive attention away from humans and look also at nonhumans. Here they are, the hidden and despised social masses who make up our morality. They knock at the door of sociology, requesting a place in the accounts of society as stubbornly as the human masses did in the nineteenth century. What our ancestors, the founders of sociology, did a century ago to house the human masses in the fabric of social theory, we should do now to find a place in a new social theory for the nonhuman masses that beg us for understanding. (1992: 152–153)

Here, the non-humans should not simply be listed as part of an inventory of capitalism. Instead, their role in social, moral, ethical and physical actions demands consideration. But in this approach, 'social' is not to be understood on the conventional terms of sociologists as a series of norms that shape conduct or as a context that explains and accounts for action. Instead, efforts must be made to make sense of the means through which associations are made, assembled or composed. Everyday life, then, is an ongoing composition in which humans and non-humans participate. The algorithm might thus require study not as a context within which everyday life happens, but as a participant. Such a move should not be underestimated. Here, Latour tells us, we end the great divide between social and technical, and assumptions that humans ought to hold status over non-humans in our accounts. Instead, we start to open up an array of questions. As Michael suggests, in this approach: 'everyday life is permeated by technoscientific artefacts, by projections of technoscientific futures and by technoscientific accounts of the present' (2006: 9).

We can also start to see in this move to grant status to the non-human that questions can open up as to precisely how such status might be construed. Assembly work or composition certainly could provide a way to frame a study of the algorithm as a participant in everyday action, but how does the algorithm become (or embody) the everyday? Mol (2006) suggests that the nature of matters—questions of ontology—are accomplished. In this line of thought, it is not that 'ontology is given before practices, but that different practices enable different versions of the world. This turns ontology from a pre-condition for politics into something that is, itself, always at stake' (Mol 2006: 2). The analytic move here is not just to treat the algorithm as participant, but to understand that participation provides grounds for establishing the nature of things, a nature that is always at stake. Being at stake is the political condition through which the nature of things is both settled and unsettled. But what does this tell us of the everyday?

Pollner's (1974) account of mundane reason points us towards a detailed consideration of the interactions through which the everyday is accomplished. Pollner draws on the Latin etymology of the word mundane (mundus) to explore how matters are not just ordinary or pervasive, but become of the world. What is settled and unsettled, what is at stake, is this becoming. For Pollner, the pertinent question in his study of US court decisions on speeding is how putative evidence that a car was driving at a certain speed can become of the world of the court charged with making a decision about a driver's possible speeding offence. Through what organisational relations, material stuff, responsibilities taken on, and accountabilities discharged, can potential evidence come to be of the world (a taken-for-granted, accepted feature) of the court's decision-making process? Pollner suggests that in instances of dispute, accountability relations are arranged such that a car and its driver cannot be permitted to drive at both 30 and 60 miles per hour simultaneously—the evidence must be made to act on behalf of one of the accounts (30 or 60), not both. Selections are made in order to demarcate what will and what will not count, what will become part of the world of the court and what will be dismissed, at the same time as responsibilities and accountabilities for action are distributed and their consequences taken on. Making sense of the algorithm, its enactment of data, its responsibilities and accountabilities on Pollner's terms, sets some demanding requirements for our study. How does the abandoned luggage algorithm that we initially encountered insist that data acts on behalf of an account as human or luggage, as relevant or irrelevant, as requiring an alert and a response or not? Although these actions might become the everyday of the algorithm, they might be no trivial matter for the people and things of the airport or train station where the algorithm will participate.
The status of people and things will be made always and already at stake by the very presence of the algorithm.

This further points towards a distinct contribution of the algorithm: not just participant, not just at stake in becoming, but also a means for composing the everyday. To return to Latour's no-longer-missing masses, he gives consideration to an automated door closer—known as a groom—that gently closes the door behind people once they have entered a room. The groom, Latour suggests, can be treated as a participant in the action in three ways:

first, it has been made by humans; second, it substitutes for the actions of people and is a delegate that permanently occupies the position of a human; and third, it shapes human action by prescribing back what sort of people should pass through the door. (1992: 160)

Prescribing back is the means through which the door closer acts on the human, establishing the proper boundaries for walking into rooms and the parameters for what counts as reasonably human from the groom's perspective (someone with a certain amount of strength, ability to move and so on). Prescribing acts on everyday life by establishing an engineered morality of what ought to count as reasonable in the human encounters met by the groom. This makes sense as a premise: to understand the abandoned luggage algorithm's moves in shaping human encounters, we might want to know something of how it was made by humans, how it substitutes for the actions of humans and what it prescribes back onto the human (and these will be given consideration in Chapter 3). But as Woolgar and Neyland (2013) caution, the certainty and stability of such prescribing warrants careful scrutiny. Prescribing might, on the one hand, form an engineer's aspiration (in which case its accomplishment requires scrutiny) or, on the other hand, might be an ongoing basis for action, with humans, doors and grooms continuously involved in working through the possibilities for action, with the breakdown of the groom throwing open the possibility of further actions. In this second sense, prescribing is never more than contingent (in which case its accomplishment also requires scrutiny!).

Collectively these ideas seem to encourage the adoption of three kinds of analytical sensibility7 for studying the everyday life of an algorithm. First, how do algorithms participate in the everyday? Second, how do algorithms compose the everyday? Third, how (to what extent, through what means) does the algorithmic become the everyday? These will be pursued in the subsequent chapters, to which I now turn.

### The Structure of the Argument

Building on the abandoned luggage algorithm, Chapter 2 will set out the algorithms and their human and non-human associations that will form the focus for this study. The chapter will focus on one particular algorithmic system developed for public transport security and explore the ways in which the system provided a basis for experimenting with what computer scientists termed human-shaped objects. In contrast with much of the social science literature on algorithms that suggests the algorithm itself is more or less fixed or inscrutable, this chapter will instead set out one basis for ethnographically studying the algorithm up-close and in detail. Placing algorithms under scrutiny opens up the opportunity for studying their instability and the ceaseless experimentation to which they are subjected. One basis for organising this experimentation is what Glynn (2010) terms elegance. Drawing on the recent growth of qualitative social science experimentation (Adkins and Lury 2012; Marres 2013; Corsín Jiménez and Estalella 2016), the chapter will consider how elegance is (and to an extent is not) accomplished. A fundamental proposition of the security system under development was that the algorithmic system could sift through streams of digital video data, recognise and then make decisions regarding humans (or at least human-shaped objects). Rather than depending on the laboratory experiments of natural science and economics analysed by science and technology studies (STS) scholars (focusing on the extension of the laboratory into the world or the world into the laboratory; Muniesa and Callon 2007), working on human-shaped objects required an ongoing tinkering with a sometimes bewildering array of shared, possible and (as it turned out) impossible controls. The chapter will suggest that elegance opens up a distinct way to conceive of the experimental prospects of algorithms under development and their ways of composing humans.

Chapter 3 will then develop the insights of Chapter 2 (on human-shaped objects and elegance) in exploring the possibility of rendering the everyday life of algorithms accountable, and the form such accountability might take. Although algorithmic accountability is currently framed in terms of openness and transparency (James 2013; Diakopoulos 2013), the chapter will draw on ethnographic engagements with the algorithmic system under development to show empirically the difficulties (and indeed pointlessness) of achieving this kind of openness. Rather than presenting an entirely pessimistic view, the chapter will suggest that alternative forms of accountability are possible. In place of transparency, the chapter focuses on STS work that pursues the characteristics, agency, power and effect of technologies as the upshot of the network of relations within which a technology is positioned (Latour 1990; Law 1996). Moving away from the idea that algorithms have fixed, essential characteristics or straightforward power or agency opens up opportunities for developing a distinct basis of accountability in action.

While experimentation provides one means to take the algorithmic literature in a different direction, in the system under development, deletion also opened up an analytic space for rethinking algorithms. Deletion became a key priority of the emerging system: just how could terabytes of data be removed from a system, freeing up space and reducing costs while not threatening the kinds of security-related activities the system was designed to manage? In Chapter 4, the calculative basis for deletion will be used to draw together studies of algorithms with the long-standing STS interest in the calculative. STS work on calculation raises a number of challenging questions. These include how accuracy is constructed (MacKenzie 1993), the accomplishment of numeric objectivity (Porter 1995), and trading, exchange and notions of equivalence (Espeland and Sauder 2007; MacKenzie 2009). The kinds of concern articulated in these works are not focused on numbers as an isolated output of calculation. Instead, numbers are considered as part of a series of practical actions involved in, for example, solving a problem (Livingston 2006), distributing resources, accountabilities or responsibilities for action (Strathern 2002), governing a country (Mitchell 2002) and ascertaining a value for some matter (Espeland and Sauder 2007; MacKenzie 2009). Attuning these ideas to algorithms provides insight into not just the content of an algorithm, but its everyday composition, effects and associated expectations. However, deletion also poses a particular kind of problem: the creation of nothing (the deleted) needs to be continually proven. The chapter explores the complexities of calculating what ought to be deleted, what form such deletion ought to take and whether or not data has indeed been deleted. These focal points and the difficulties of providing proof are used to address suggestions in contemporary research that algorithms are powerful and agential, easily able to enact and execute orders.
Instead, the chapter calls for more detailed analysis (picked up in the next chapter) of what constitutes algorithmic success and failure.

Following on from Chapter 4's call for more detailed analysis of success and failure, Chapter 5 explores the problems involved in demonstrating an algorithmic system to a variety of audiences. As the project team drew closer to its final deadlines and faced up to the task of putting on demonstrations of the technology under development to various audiences—including the project funders—it became ever more apparent that in a number of ways the technology did not work. That is, promises made to funders, to academics, to potential end users on an advisory board, and to ethical experts brought in to assess the technology might not be met. In project meetings, it became rapidly apparent that a number of ways of constituting a response to different audiences and their imagined demands could be offered. This was not simply a binary divide between providing a single truth or falsehood. Instead, a range of different more or less 'genuine' demonstrations with greater or lesser integrity were discursively assembled by the project team, and ways to locate and populate, witness and manage the assessment of these demonstrations were brought to the table. Holding in place witnesses, technologies and practices became key to successfully demonstrating the algorithm. In this chapter, the notion of integrity will be used to suggest that ideas of sight, materiality and morality can be reworked and incorporated into the growing literature on algorithms as a basis for investigating the everyday life of what it means for an algorithmic system to work.

The final chapter explores how a market can be built for an algorithmic system under development. It draws together studies of algorithms with the growing literature in STS on markets and the composition of financial value (Callon 1998; MacKenzie et al. 2007; Muniesa et al. 2007, 2017). In particular, it focuses on performativity (see, e.g., MacKenzie et al. 2007; MacKenzie 2008; Cochoy 1998; drawing on Austin 1962). Although STS work on markets has on occasions looked into financial trading through algorithms, the move here is to explore market making for algorithms. To accomplish this kind of market work and build a value for selecting relevance and deleting irrelevance, the project co-ordinators had to look beyond accountable outputs of technical certainty (given that, as we will have seen in Chapter 5, the machine had trouble delineating relevance and adequately deleting data). Instead, they looked to build a market for the algorithmic system through other means. Rather than trying to sell technological efficacy, the project co-ordinators sought to build a market of willing customers (interested in a technology that might enable them to comply with emerging regulations) who were then constituted as a means to attract others to (potentially) invest in the system. Building a market here involved different kinds of calculations (such as Compound Annual Growth Rates for the fields in which the system might be sold) to forecast a market share. This might accomplish value by attracting interested parties whose investments might bring such a forecast closer to reality. The calculations had to enter a performative arena. This final chapter will suggest that market work is an important facet of the everyday life of an algorithm, without which algorithmic systems, such as the one featured in this book, would not endure. The chapter concludes with an analysis of the distinct and only occasionally integrated everyday lives of the algorithm.

### References

Adkins, L., & Lury, C. (2012). *Measure and Value*. London: Wiley-Blackwell.


Cochoy, F. (1998). Another Discipline for the Market Economy: Marketing as a Performative Knowledge and Know-How for Capitalism. In M. Callon (Ed.), *The Laws of the Markets* (pp. 194–221). Oxford: Blackwell.


*Search. The Politics of Search Beyond Google* (pp. 98–115). Piscataway, NJ: Transaction Publishers.


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/ by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# Experimentation with a Probable Human-Shaped Object

**Abstract** This chapter sets out the algorithms that will form the focus for this book and their human and non-human associations. The chapter focuses on one particular algorithmic system developed for public transport security and explores the ways in which the system provided a basis for experimenting with what computer scientists termed human-shaped objects. In contrast to much of the social science literature on algorithms that suggests the algorithm itself is more or less fixed or inscrutable, this chapter will instead set out one basis for ethnographically studying the algorithm up-close and in detail. Placing algorithms under scrutiny opens up the opportunity for studying their instability and the ceaseless experimentation to which they are subjected. An important basis for this experimentation, the chapter will suggest, is elegance. The chapter will suggest that elegance opens up a distinct way to conceive of the experimental prospects of algorithms under development and their ways of composing humans.

**Keywords** Experimentation · Human-shaped objects · Elegance · Objects

## Opening

How can we get close to the everyday life of an algorithm? Building on the Introduction to this book, how can we make sense of the ways algorithms participate in everyday life, compose the everyday or are continually involved in the becoming of the everyday? In this chapter, I will focus on the question of how an algorithm can (at least begin to) participate in everyday life. I will set out one particular project that provided a means to get close to the everyday life of an algorithm (or what might be more appropriately termed an 'algorithmic system', as we will go on to see). I will then investigate one focus for algorithmic experimentation in this project: efforts to identify humans. As the project developed, the notion of human-shaped objects became more and more apparent in project documents, meetings and demonstrations. We will look into what counts as an anticipated human-shaped object, and we will see how our algorithm struggles to grasp such objects. I will also suggest that the human-shaped object becomes itself a matter for experimentation. This is, I will contend, an important aspect of the everyday life of the algorithm: that it becomes entangled with the incredibly banal everyday life of the human (addressing questions such as what is the shape of a human). I will also suggest that the systems with which algorithms participate in construing effects are not stable and certainly not opaque within these project settings. Instead, algorithms and their system are continually inspected and tested, changed and further developed in order to try and grasp the human-shaped object. Within this experimental setting, we can hence note that the algorithms and their system do not operate entirely in line with expectations or within parameters that instantly make sense. Much of the everyday life of the algorithm is thus made up of attempts to get algorithms, their system, computer scientists or others to adequately account for what it is that they have produced. 
The chapter begins with a consideration of experimentation.

## What Is Experimentation?

The tradition of studying the experiment in science and technology studies (STS) has been focused around the space of the laboratory (Latour and Woolgar 1979), forms of expertise (Collins and Evans 2007) and devices (Latour 1987) that render the laboratory a centre of, for example, calculation. The laboratory becomes the space into which the outside world is drawn in order to approximate its conditions for an experiment. Or alternatively, the conditions of the laboratory are extended into the world beyond the laboratory in order to take the laboratory experiment to the world (Latour 1993). The experiment as such, then, becomes a replicable phenomenon through which some feature of the world is proclaimed. And we see some parallels drawn with economic experiments that similarly seek to draw up variables to manage and control, a world to be drawn into the economic laboratory or a set of conditions to be extended beyond the laboratory (Muniesa and Callon 2007). The economic experiment, like the laboratory experiment, is as much about demonstration, as it is about discovery (Guala 2008).

In Chapter 5, we will see that in the later stages of the everyday life of our algorithm, these concerns for control and demonstration came to the fore—particularly when research funders wanted to see results. But for now, our algorithm—the abandoned luggage algorithm from the Introduction—sits somewhat meekly and unknowing in an office. It is waiting, but it does not know for what it waits: not for an experiment in the closely controlled sense of the term, not for a pristine laboratory space and not for a set of controlled variables, even human or luggage variables, through which it can demonstrate its capacity to grasp the world around it. To begin with it awaits experimentation.

Experimentation sits somewhat apart from experiments. In place of controls or neatly defined space come proposals, ideas, efforts to try things and see what happens. Experimentation can be as much a part of qualitative social science as it can be a part of algorithmic computer science. In the social sciences, experimentation has been used as an impetus by Marres (2013) to experimentalise political ontology and by Johansson and Metzger (2016) to experimentalise the organisation of objects. What these works point towards is the fundamental focus for experimentation: that the nature of just about anything can be rendered at stake within the experimental realm. Scholars have also begun to conceive of experimentalising economic phenomena (Wherry 2014; Muniesa 2016a, b). This is interesting for drawing our attention towards the ways in which what might otherwise be quite controlled, laboratory-like settings can be opened up for new lines of thought through experimentation. These works draw on a patchy history of social science experimentation that has tended to raise insights and ethical concerns in equal measure. One historical route for the development of such experimentation has been Garfinkel's (1963, 1967) breach experiments. Here, the aim was to disrupt—or breach—taken-for-granted features of everyday life in everyday settings in order to make those features available for analysis. But unlike the laboratory tradition, the breaches for Garfinkel were broadly experimental in the sense of providing some preliminary trials and findings to be further worked on. They were heuristic devices, aids to a sluggish imagination, provoking new thoughts and new lines of enquiry. Our algorithm is awaiting such provocation. But it is also awaiting experimentation that opens up questions of very fundamental features of everyday life, such as what is a human and how ought we to know.
And it awaits experimentation that opens up what might otherwise be a pristine laboratory space to new questions, new forms of liveliness.

Experimentation began from the outset of the project in which the algorithm was a participant. We will begin here with the initial development of the project in order to provide a prior step to experimentation. Although the experimentation was more open than a controlled laboratory experiment, it was not free from any constraints or expectations. The experimentation had a broad purpose that was successively set and narrowed as the experimentation proceeded. What was being experimented upon and what was anticipated as the outcome of experimentation was the product of successive rounds of experimentation. To make sense of these expectations, we need to see how the project produced its algorithms in the first place.

# The Algorithmic Project

The project upon which this book is based began with an e-mail invitation: Would I be interested in participating in a project that involved the development of a new 'algorithmic', 'smart' and 'ethical' video-based surveillance system? The project coordinator informed me that the project would involve a large technology firm (TechFirm1), two large transport firms where the technology would be tested and developed (SkyPort, which owns and operates two large European city airports, and StateTrack, a large European state railway) and two teams of computer scientists (from University 1, UK, and University 2, Poland) and that the project would be managed by a consultancy firm (Consultor, Spain). I was being invited to oversee the ethics of the technology under development and to provide an (at least partially) independent ethical assessment. The project would involve developing a system that would use algorithms to select security-relevant images from the CCTV systems of the airport and train station. It would use this ability to demarcate relevance as a basis for introducing a new, ethical, algorithmic system.

The coordinator suggested the project would provide a location for experimentation with three ethical aims: that algorithms could be used to reduce the scope of data made visible within a video-based surveillance system by only showing 'relevant' images; that algorithms could be used to automatically delete the vast majority (perhaps 95%) of surveillance data that was not relevant; and that no new algorithms or surveillance networks would need to be developed to do so. These aims had been developed by the coordinator into an initial project bid. The coordinator hoped the 'ethical' qualities of the project were clear in the way the aims were positioned in the bid as a response to issues raised in popular and academic discussions about, for example, privacy, surveillance and excessive visibility (Lyon 2001; Davies 1996; Norris and Armstrong 1999; Bennett 2005; Van der Ploeg 2003; Taylor 2010; McCahill 2002) and concerns raised with algorithmic surveillance (Introna and Woods 2004). In particular, the project bid set out to engage with contemporary concerns regarding data retention and deletion, as very little data would be kept (assuming the technology worked).

The proposal was a success, and the project was granted €2.8m (about \$3.1m in mid-2015) under the European Union's 7th Framework Programme. A means to fulfil the promises of ethical algorithms committed to the project bid would now have to be found. This set the scene for early rounds of experimentation.

# Establishing the Grounds for Experimentation and the Missing Algorithms

The basis for initial experimentation within the project was a series of meetings between the project participants. Although there were already some expectations set in place by the funding proposal and its success, the means to achieve these expectations, and their precise configuration, had not yet been established. I now found myself sat in these meetings as an ethnographer, watching computer scientists talking to each other mostly about system architectures, media proxies, the flow of digital data—but not so much about algorithms.

Attaining a position as an ethnographer on this project had been the result of some pre-project negotiation. Following the project coordinator's request that I carry out an ethical review of the project under development, I had suggested that it might be interesting, perhaps vital, to carry out an assessment of the system under development. Drawing on recent work on ethics and design and ethics in practice,2 I suggested that what got to count as the ethics of an algorithm might not be easy to anticipate at the start of a project or from a position entirely on the outside. It might make more sense to work with the project team, studying the development of the technology and feeding in ethical questions and prompts over the three years of the project. Although this seemed to be an interesting proposition for project participants, questions were immediately raised regarding my ability to be both in the project (on the inside) and offer an ethical assessment (something deemed to be required from the outside). I suggested that during the course of the project I could use the emerging ethnography of system development to present the developing algorithm to various audiences who might feed back interesting and challenging questions, I could set up meetings with individuals who might provide feedback on the developing technology, and I would put together an ethics board. The latter would be outsiders to the project, enabling me to move between an inside and outside position, working with, for example, the computer scientists in the project at some points and with the ethics board members at other moments. As we will see in Chapter 3, this ethical assessment formed one part of a series of questions regarding accountability that were never singularly resolved in the project. However, for now, at the outset, my role as ethnographer was more or less accepted, if not yet defined.

But what of the algorithms? In these early project meetings when I was still developing a sense of what the project was, what the technology might be, and what the challenges of my participation might involve, algorithms still retained their mystery. In line with academic writing on algorithms that emphasises their opacity (see Introduction to this book), at this moment the nature and potential of algorithms in the project was unclear. Occasionally during these meetings, algorithms were mentioned, and most project participants seemed to agree with the computer scientists from University 1 and University 2 that the 'algorithms' were sets of IF-THEN rules, already complete with associated software/code that could be 'dropped into' the system. The system seemed to be the thing that needed to be developed, not the algorithms. As we will see, this notion of 'dropping in' an algorithm turned out to be a wildly speculative and over-optimistic assessment of the role and ability of algorithms, but for now in project meetings, the system was key.

Establishing the precise set-up for the algorithmic system under development involved the computer scientists and transport firms involved in the project (the airport and train operator) discussing first steps in technology development. Although this could be described as a negotiation, it mostly seemed to involve the computer scientists proposing system components and then later an order of system components (more or less setting out how data would flow through the system and how each component could talk to each other) and then the transport firms would respond. There was never an occasion where the transport firms would make the first proposal. This seemed to be a result of the meetings being framed as technical discussions primarily, rather than being focused on, for example, usability. It was also the case that with more than a decade of experience in developing these systems, University 1 and University 2 computer scientists could talk with a fluency, eloquence and technical mastery that no one else could match. When the computer scientists made a proposal, it was up to the transport firms to accept or not the proposal and then it was down to the computer scientists to make any necessary adjustments.

But working together in this kind of complex multiparty, international project was not entirely straightforward. The experience of the computer scientists in developing these kinds of systems was thus a welcome contribution to the project in itself. It established a way of working that others could fit in with. Its absence might have meant a significant number of meetings to decide on ways of meeting. The meetings became framed as technical matters in which the computer scientists would lead partly because of the lack of any alternative way to frame meetings that anyone put forward. The ethnographer certainly didn't propose to have meetings framed around ethnography (at least not yet, see Chapters 3 and 5).

The meetings worked as follows. The participants would be gathered around a semicircle of tables or, on one occasion, an oval table, with a screen and projector at one end. Onto the screen would be projected a technical matter under discussion—often the system architecture. This set out the distinct components of the system under development and the role such components might play. Discussions in meetings then focused on the implications of setting up the system in one way or another, along with discussions of individual components of the system and, in early meetings, new components that might be needed. One computer scientist would sit at a laptop linked to the projector with the system architecture on their screen and make adjustments as discussions continued so that meeting participants could further discuss the emerging system. As they made changes on their laptop, these were projected onto a large screen for meeting participants to discuss. Sometimes two or more computer scientists would gather round the laptop to further discuss a point of detail or refine precisely what it was that had just been proposed in the meeting and what this might look like for the system. A typical architecture from one of the later meetings is shown in Fig. 2.1.

By this point in the project, it had been agreed that the existing surveillance cameras in transport hubs operated by SkyPort and StateTrack would feed into the system (these are represented by the black camera-shaped objects on the left). After much discussion, it had been agreed that the digital data from these cameras would need to feed into a media proxy. Both sets of computer scientists were disappointed to learn that the transport hubs to be included in the project had a range of equipment. Some cameras were old, some new, some high definition and some not, and each came with a variable frame rate (the number of images a second that would flow into the algorithmic system). The media proxy was required to smooth out the inconsistencies in this flow of data in order that the next component in the system architecture would then be able to read the data. Inconsistent or messy data would prove troublesome throughout the project, but in these meetings, it was assumed that the media proxy would work as anticipated.

After some discussion, it was agreed that the media proxy would deliver its pristine data to two further system components. These comprised the Event Detection system and the Route Reconstruction

**Fig. 2.1** System architecture

system. The Event Detection system was where the algorithms (including the abandoned luggage algorithm of the Introduction to this book) would sit. The idea was that these algorithms would sift through terabytes of digital video data and use IF-THEN rules to select out those events that security operatives in a train station or airport would need to see. In discussions between the computer scientists and transport firms, it was agreed that abandoned luggage, people moving the wrong way (counter-flow) and people moving into forbidden areas (such as the train track in train stations or closed offices in airports) would be a useful basis for experimentation in the project. These would later become the basis for algorithmically experimenting with the basic idea that relevant images could be detected within flows of digital video data. For now, it was still assumed that algorithms could simply be dropped into this Event Detection component of the architecture. Relevant images would then be passed to the User Interface (UI) with all data deemed irrelevant passed to the Privacy Enhancement System. This was put forward as a key means to achieve the ethical aims of the project. It was suggested that only a small percentage of video data was relevant within an airport or train station, that only a small percentage of data needed to be seen and that the rest of the data could be stored briefly in the Privacy Enhancement System before being securely deleted. It later transpired that detecting relevant images, getting the algorithms to work and securely deleting data were all major technical challenges. But for now, in these early project meetings, it was assumed that the system would work as expected.
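The book does not reproduce the project's code, but the IF-THEN logic described here can be sketched in outline. The following is a purely illustrative sketch, not the project's implementation: every name, field and threshold is hypothetical, and real event detection would operate on pixel data rather than pre-labelled objects.

```python
# Illustrative sketch of IF-THEN event detection (hypothetical names and
# thresholds throughout; not the project's code). Each frame is assumed to
# arrive with already-detected objects attached, which in practice was
# itself the hard problem.

ABANDONMENT_SECONDS = 30  # hypothetical threshold for 'abandoned' luggage

def classify_frame(frame):
    """Return 'relevant' if any IF-THEN rule fires for this frame."""
    for obj in frame["objects"]:
        # IF a luggage-shaped object has been stationary too long with no
        # human-shaped object nearby, THEN the frame is relevant.
        if (obj["kind"] == "luggage"
                and obj["stationary_for"] >= ABANDONMENT_SECONDS
                and not obj["near_human"]):
            return "relevant"
        # IF a human-shaped object moves against permitted flow or enters
        # a forbidden area, THEN the frame is relevant.
        if obj["kind"] == "human" and (obj["counter_flow"]
                                       or obj["in_forbidden_zone"]):
            return "relevant"
    return "irrelevant"

def route(frames):
    """Relevant frames go to the UI; the rest to the Privacy Enhancement
    System for brief storage and eventual secure deletion."""
    to_ui, to_privacy_system = [], []
    for frame in frames:
        if classify_frame(frame) == "relevant":
            to_ui.append(frame)
        else:
            to_privacy_system.append(frame)
    return to_ui, to_privacy_system
```

The sketch makes visible the assumption buried in 'dropping in' an algorithm: the rules are trivial once objects arrive neatly labelled as 'luggage' or 'human', but producing those labels from messy video data was where the project's difficulties lay.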

The Route Reconstruction component was a later addition. This followed on from discussions between the transport firms and the computer scientists, in which it became clear that having an image of, for example, an abandoned item of luggage on its own was not particularly useful in security terms. Transport operatives wanted to know who had left the luggage, where they had come from and where they went next. The theory behind the Route Reconstruction system (although see Chapter 3 for an analysis of this) was that it would be possible to use probabilistic means to trace the history around an event detected by an algorithm. The UI would then give operatives the option to see, for example, how an item of luggage had been abandoned, by whom, with whom they were walking and so on. This would mean that the Privacy Enhancement System would need to store data for as long as these reconstructions were required. It was assumed that most would be performed within 24 hours of an incident. Any data deemed relevant and any reconstructions viewed by operatives would be moved out of the auto-deletion feature of the Privacy Enhancement System and kept (in Video Storage). According to the computer scientists, this should still mean that around 95% of data was deleted and that the ethical aims to see less and store less data would be achieved.
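The retention rule described here has a simple decision structure, which can be sketched as follows. Again this is a minimal illustration with invented names, assuming a 24-hour reconstruction window; the project's actual deletion mechanism (and its difficulties) is the subject of Chapter 4.

```python
# Illustrative sketch of the Privacy Enhancement System's retention rule
# (hypothetical names; not the project's code). Data flagged as relevant
# or used in a reconstruction escapes auto-deletion; everything else is
# held briefly, then securely deleted.

RETENTION_HOURS = 24  # most reconstructions assumed to occur within a day

def retention_action(record, hours_since_capture):
    """Decide what the system should do with one stored video record."""
    if record.get("relevant") or record.get("used_in_reconstruction"):
        # Exempt from auto-deletion: moved to Video Storage and kept.
        return "move_to_video_storage"
    if hours_since_capture >= RETENTION_HOURS:
        # The anticipated fate of around 95% of the data.
        return "secure_delete"
    # Still within the reconstruction window: hold, pending deletion.
    return "hold_in_privacy_system"
```

The sketch also hints at the proof problem the book raises: the rule can return "secure_delete", but nothing in the decision itself demonstrates that the deletion was actually carried out.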

The meetings were discursive fora where the computer scientists took the lead in making proposals and other participants, mostly the transport firms, offered their response. The overall effect was that the algorithmic system began to emerge and take shape, at least on a computer screen and as a projection. The components that would need to be developed were discussed, the future role of algorithms in Event Detection was more or less set, and a specific shape was given to the project's ethical proposals. A technical means was proposed for limiting the range of data that would be seen and the amount of data that would be stored. As we will see, producing a UI and Route Reconstruction system (Chapter 3) and deleting data (Chapter 4) were problematic throughout the life of the project. However, for now we will retain our focus on experimenting with the human-shaped object.

### Elegance and the Human-Shaped Object

With the system architecture agreed at least in a preliminary form, the computer scientists could get on with the task of making the system work. Key to having a working system was to ensure that the data flowing from surveillance cameras through the Media Proxy could be understood by the Event Detection component. If events could not be detected, there would be no system. Figuring out ways to detect abandoned luggage, moving the wrong way and moving into a forbidden space were crucial. Central to Event Detection was the human-shaped object. As digital video was streamed through the system, the algorithms would need to be able to pick out human-shaped objects first, and then the actions in which they were engaged second. Relevant actions for this experimental stage of system development would be a human-shaped object moving the wrong way, a human-shaped object moving into a forbidden space and a human-shaped object abandoning its luggage.

How could this human-shaped object be given a definition that made sense for operationalisation within the system? The algorithms for Event Detection used in video analytic systems are a designed product. They take effort and work and thought and often an amount of reworking. The algorithms and their associated code for this project built on the decade of work carried out, respectively, by University 1 and University 2. As these long histories of development had been carried out by various colleagues within these Universities over time, tinkering with the algorithms was not straightforward. When computer scientists initially talked of 'dropping in' algorithms into the system, this was partly in the hope of avoiding many of the difficulties of tinkering, experimenting and tinkering again with algorithms that were only partially known to the computer scientists. As we saw in the Introduction with the abandoned luggage algorithm, the algorithm establishes a set of rules which are designed to contribute to demarcating relevant from irrelevant video data. In this way, such rules could be noted as a means to discern people and things that could be ignored and people and things that might need further scrutiny. If such a focus could hold together, the algorithms could indeed be dropped in. However, in practice, what constituted a human-shaped object was a matter of ongoing work.

Let's return to the subject of the Introduction to explore experimentation with human-shaped objects. As a reminder, these are the IF-THEN rules for the abandoned luggage algorithm (Fig. 2.2):

**Fig. 2.2** Abandoned luggage algorithm

As I noted in the Introduction, what seems most apparent in these rules is the IF-THEN structure. At its simplest, the 'IF' acts as a condition and the 'THEN' acts as a consequence. In this particular algorithm, the IF-THEN rules were designed to operate in the following way. IF an object was detected within a stream of digital video data fed into the system from a particular area (notably a train station operated by StateTrack or an airport operated by SkyPort), THEN the object could be tentatively allocated the category of potentially relevant. IF that same object was deemed to be in the class of objects 'human-shaped', THEN that object could be tentatively allocated the category of a potentially relevant human-shaped object. IF that same human-shaped object was separate from a luggage-shaped object, THEN it could maintain its position as potentially relevant. IF that same human-shaped object and luggage-shaped object were set apart beyond a specific distance threshold set by the system (say 2 or 10 metres) and the same objects were set apart beyond the temporal threshold set by the system (say 30 seconds or 1 minute)—that is, if the person and luggage were sufficiently far apart for sufficiently long—THEN an alert could be sent to surveillance operatives. The alert would then mean that the package of data relevant to the alert would be sent to the UI and operatives could then click on the data, watch the video of abandoned luggage and offer a relevant response (see Chapter 3). What is important for now is how these putative objects were given shape and divided into relevant and irrelevant entities.
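The chained conditions described above can be rendered, purely as an illustrative sketch, in code. Everything here is hypothetical: the function names are mine, and the thresholds simply reuse the example values mentioned in the text (2 metres, 30 seconds); the project's actual implementation is not reproduced in this book.

```python
# Illustrative sketch of the abandoned-luggage IF-THEN rules.
# All names and threshold values are hypothetical stand-ins.

DISTANCE_THRESHOLD_M = 2.0   # metres apart before luggage counts as "left"
TIME_THRESHOLD_S = 30.0      # seconds apart before an alert is raised

def classify_object(detection):
    """IF an object is detected, THEN tentatively mark it potentially relevant."""
    if detection is None:
        return None
    return {"shape": detection["shape"], "relevant": True}

def abandoned_luggage_alert(human, luggage, distance_m, seconds_apart):
    """Apply the chained IF conditions; return True to signal a THEN alert."""
    if human is None or luggage is None:
        return False
    if human["shape"] != "human" or luggage["shape"] != "luggage":
        return False
    if distance_m <= DISTANCE_THRESHOLD_M:
        return False          # still close enough to be "with" the luggage
    if seconds_apart <= TIME_THRESHOLD_S:
        return False          # not apart for long enough yet
    return True               # THEN: send the alert package to the UI

human = classify_object({"shape": "human"})
bag = classify_object({"shape": "luggage"})
print(abandoned_luggage_alert(human, bag, distance_m=5.0, seconds_apart=45.0))  # True
```

The point of the sketch is the ordering: each IF acts as a filter, and only data that survives every condition reaches the THEN that issues an alert to operatives.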

If this structuring and division of various entities (humans, luggage, time, space, relevance and irrelevance) occurred straightforwardly and endured, it might be tempting to argue that this is where the power of algorithms is located or made apparent. A straightforward short cut would be to argue that the algorithm structures the social world and through this kind of statement we could then find what Lash (2007) refers to as the powerful algorithm and what Beer (2009) suggests are algorithms' ability to shape the social world. We could argue that the outputs of the system demonstrated an asymmetrical distribution of the ability to cause an effect—that it is through the algorithm that these divisions between relevant and irrelevant data can be discerned. However, such a short cut requires quite a jump from the algorithmic IF-THEN rules to their consequences. It ignores the everyday life in which the algorithm must be a successful participant for these kinds of effects to be brought about. If instead we pay attention to the everyday work required for algorithmic conditions and consequences to be achieved, what we find is not that the algorithm structures the social world. Instead, experimentation takes place (and fails, takes place again, things are reworked and then sometimes fail again or work to a small extent) to constitute the conditions required for the system to participate in the production of effects or the system gets partially redrawn to fit new versions of the conditions. This continual experimentation, rewriting and efforts to achieve conditions and consequences are not only central to the work of computer scientists but also crucial to the life of the algorithm. It is where the distinction between relevant and irrelevant data is continuously in the process of being made for the system. It is where the human-shaped object is drawn up and pursued. It is where the nature of things is made at stake.

The experimental basis designed to enable the algorithm to participate in the everyday life of the airport and train station had, what was for the ethnographer, a peculiar organising principle. The computer scientists of University 1 and University 2 talked of 'elegance' during the meetings around system architecture, huddled around the laptop on which they made updates to the system and in the subsequent human-shaped object experimentation that we will now consider. This seemed like an odd term to me in a series of meetings that mostly involved quite specialist, technical language. Elegance seemed to come from a different field—perhaps fashion or furniture design. What could it mean for the computer scientists to talk of elegance, or rather how was the term elegance given meaning by the practical work of the computer scientists?

Ian Glynn (2010) captures something of what elegance can mean in his study of experiments and mathematical proofs. Glynn suggests elegance can be found in scientific and mathematical solutions which combine concision, persuasion and satisfaction. As I followed the experimentation of the computer scientists, this approach to elegance seemed useful for making sense of the ways they discussed system architecture. The composition of the different system components, their location in relation to each other within the system architecture and how they might talk to each other was each discernible as a discussion focused on what might be elegant. However, this came to the fore even more strongly with the human-shaped object. What would count as concise, persuasive and satisfying as a human-shaped object seems a useful way to group together much of the discussion that took place.

Achieving the IF conditions of the Event Detection algorithms required coordinated work to bring together everyday competences (among surveillance operatives and computer scientists), the creation of new entities (including lines of code), the further development of components (from algorithmic rules to new forms of classification) and the development of a sense of what the everyday life was in which the algorithms would participate (in the train station or airport). It also required consideration of the changes that might come about in that everyday life. Elegance could be noted as the basis for this coordinated work in the following way. The first point of contention was the technical basis for developing a means to classify putative objects. Readers will recall that first identifying a putative object is important within the stream of digital video data in order that other data can be ignored. What might count as a human-shaped object or a luggage-shaped object as a precise focus for classification was vital. However, what might count as a concise means to achieve this classification was an important but slightly different objective. As project meetings progressed, it became clear that the amount of processing power required to sift through all the data produced in a train station or airport in real time and classify all human-shaped objects precisely would be significant. Face recognition, iris recognition and gait recognition (based on how people walked) were all ruled out as insufficiently concise. These approaches may have been persuasive as a means to identify specific individuals in particular spaces, but their reliability depended on having people stand still in controlled spaces and have their features read by the system for a few seconds. This would not be very satisfying for passengers or airports whose business models depended on the rapid movement of passengers towards shops (Woolgar and Neyland 2013).

How then to be concise and satisfying and persuasive in classifying human-shaped objects? As Bowker and Star (2000) suggest, classification systems are always incomplete. This incompleteness ensures an ambiguity between the focus of algorithmic classification (the putative human-shaped object) and the entity being classified (the possible human going about their business). Concision requires various efforts to classify to a degree that is satisfying and persuasive in relation to the needs of the system and the audiences for the algorithmic system. The system needs to do enough (be satisfying and persuasive), but no more than enough (be concise), as doing more than enough would require more processing of data. In the process of experimenting with human-shaped objects in this project, various more or less concise ways to classify were drawn up and considered or abandoned, either because they would require too much processing power (probably quite persuasive but not concise) or were too inaccurate (quite concise, but produced results that were not at all persuasive). At particular moments, (not very serious) consideration was even given to changing the everyday life into which the algorithms would enter in order to make classification a more straightforward matter. For example, computer scientists joked about changing the airport architecture to suit the system, including putting in higher ceilings, consistent lighting and flooring, and narrow spaces to channel the flow of people. These were jokes in the sense that they could never be accommodated within the project budget. Elegance had practical and financial constraints.

A first move in classifying objects was to utilise a standard practice in video analytics: background subtraction. This method for identifying moving objects was somewhat time-consuming and processor intensive, and so not particularly elegant. But these efforts could be 'front-loaded' prior to any active work completed by the system. 'Front-loading' in this instance meant that a great deal of work would be done to produce an extensive map of the fixed attributes of the setting (airport or train station) prior to attempts at classification work. Mapping the fixed attributes would not then need to be repeated unless changes were made to the setting (in this project such changes included a change to a shopfront and a change to the layout of the airport security entry point). Producing the map provided a basis to inform the algorithmic system what to ignore, helping to demarcate relevance and irrelevance in an initial manner. Fixed attributes were thus nominally collated as non-suspicious and irrelevant in ways that people and luggage, for example, could not be, as these latter objects could not become part of the map of attributes (the maps were produced based on an empty airport and train station). Having a fixed map then formed the background from which other entities could be noted. Anything that the system detected that was not part of the map would be given an initial putative identity as requiring further classification.
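Background subtraction of this kind can be sketched minimally. This is an assumption-laden illustration: a single reference frame of the empty station stands in for the project's 'map of fixed attributes', and the threshold value is arbitrary; real systems typically build statistical background models from many frames.

```python
import numpy as np

# Minimal background-subtraction sketch. A single frame of the empty
# setting acts as the fixed map; pixels that differ from it by more
# than an (illustrative) threshold become putative foreground objects.

def putative_foreground(frame, background_map, threshold=30):
    """Mark pixels that differ from the fixed map as requiring classification."""
    diff = np.abs(frame.astype(int) - background_map.astype(int))
    return diff > threshold   # True where something non-background appears

# Empty-station map vs. a frame in which a bright object has entered.
background_map = np.zeros((4, 4), dtype=np.uint8)
frame = background_map.copy()
frame[1:3, 1:3] = 200        # a putative object, not part of the map

mask = putative_foreground(frame, background_map)
print(int(mask.sum()))       # 4 pixels flagged for further classification
```

Everything matching the map is ignored as irrelevant; only the flagged pixels go forward for classification, which is what makes the front-loaded map a means of demarcating relevance.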

The basis for demarcating potentially relevant objects depended to some degree, then, on computer scientists and their understanding of spaces such as airports, maps that might be programmed to ignore for a time certain classes of objects as fixed attributes, and classification systems that might then also—if successful—provide a hesitant basis for selecting out potentially relevant objects. It is clear that anything that algorithms were able to 'do' was situated in continual reconfigurations of the entities involved—making sense of the everyday life of the algorithm was thus central.

Mapping for background subtraction was only a starting point. Objects noted as non-background entities then needed to be further classified. Although background subtraction was a standard technique in video analytics, further and more precise object classification was the subject of some discussion in the project. Eventually, two means were set for classifying those entities noted as distinct from the background that might turn out to be human-shaped objects, and these became the focus for more intense experimentation in the project. The first of these involved bounding boxes, and the second involved a more precise pixel-based classification. Both approaches relied on the same initial parameterisation of putative objects. To parameterise potential objects, models had to be computationally designed. This involved experimenting with establishing edges around what a human-shaped object was likely to be (in terms of height, width and so on). Other models then had to be built to parameterise other objects, such as luggage, cleaners' trolleys, signposts and other non-permanent attributes of the settings under surveillance. The models relied on 200-point vector analysis to set in place what made up the edges of the object under consideration and then to which model those edges suggested the object belonged. This was elegant insofar as it would produce rapid, real-time classifications because it was concise, using only a minimal amount of processing power and being achievable very quickly. Parameterisation was presented by the computer scientists as a form of classification that the developing algorithmic system could manage while the system also carried out its other tasks. In this way, parameterisation would act as an initial but indefinite basis for object classification that could be confirmed by other system processes and even later by surveillance operatives when shown images of, for example, an apparently suspicious item of luggage.
However, these parameterisations could only be adjudged as satisfactory and persuasive when they were put to use in the airport and train station. There were just too many possible complicating issues to predict how an initial experimentation with parameterisation would turn out in practice. Initial parameterisation did at least allow the computer scientists to gain some confidence that their putative classifications could be achieved within the bounds of processing possibility and could be achieved by making selections of relevance from streams of digital video data.
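Parameterisation of this kind (matching the edges of a putative object against pre-designed models) might be sketched as nearest-model matching. The sketch is hypothetical: the project used 200-point edge vectors, while a four-value vector of height, width, aspect ratio and area stands in here purely for illustration, as do the model values.

```python
import math

# Hypothetical sketch of parameterisation as nearest-model matching.
# Each model summarises the expected edges of an object class; a
# detected object is assigned to whichever model its own edge vector
# sits closest to. Values are invented for illustration.

MODELS = {
    "human":   [1.7, 0.5, 3.4, 0.85],   # height, width, aspect, area
    "luggage": [0.6, 0.4, 1.5, 0.24],
}

def classify_by_model(edge_vector):
    """Assign the putative object to the model with the nearest edge vector."""
    def dist(model):
        return math.dist(MODELS[model], edge_vector)
    return min(MODELS, key=dist)

print(classify_by_model([1.6, 0.5, 3.2, 0.8]))   # 'human'
print(classify_by_model([0.5, 0.35, 1.4, 0.2]))  # 'luggage'
```

The concision the computer scientists valued lies in the tiny amount of computation per object: one distance calculation per model, rather than, say, full face or gait recognition.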

Once parameterised as putative human-shaped or luggage-shaped objects, the action states of these objects also required classification, for example, as moving or not moving. This involved object tracking to ascertain the action state of the objects being classified. To achieve the conditions established in the algorithmic IF-THEN rules, the system had to identify, for example, that a putative item of luggage demarcated as potentially relevant, based on a designed model used to initiate parameterisation, was no longer moving and that a human-shaped object, derived from a similar process, had left this luggage, had moved at least a certain distance from the luggage and for a certain time. In order to track objects that had been given an initial and hesitant classification, human-shaped objects and luggage-shaped objects would be given a bounding box. This was a digitally imposed stream of metadata that would create a box around the object according to its already established edges (Fig. 2.3).

The box would then be given a metadata identity according to its dimensions, location within the airport or train station (e.g. which camera it appeared on) and its direction and velocity. For the Event Detection algorithms of moving into a forbidden space or moving in the wrong direction (e.g. going back through airport security or going the wrong way through an entry or exit door in a rush hour train station), these bounding boxes were a concise form of identification. They enabled human-shaped objects to be selected with what might be a reasonable accuracy and consistency and without using too much processing effort. They were elegant, even if visually they looked a bit ugly and did little to match the actual shape of a human beyond their basic dimensions.

**Fig. 2.3** An anonymous human-shaped bounding box
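The metadata identity of a bounding box (dimensions, camera location, direction and velocity) might be sketched as a simple record, together with a wrong-direction check of the kind the Event Detection algorithms required. The field names and the tolerance value are my assumptions, not the project's.

```python
from dataclasses import dataclass

# Sketch of a bounding box's metadata identity. Field names are
# hypothetical; the text only says boxes carried dimensions, a
# location (e.g. camera), a direction and a velocity.

@dataclass
class BoundingBox:
    width: int            # box dimensions in pixels
    height: int
    camera_id: str        # where in the station/airport it appeared
    direction_deg: float  # heading of the tracked object
    velocity: float       # e.g. pixels per frame

    def moving_wrong_way(self, permitted_deg, tolerance_deg=90.0):
        """Concise wrong-direction check: compare heading to the permitted flow."""
        delta = abs((self.direction_deg - permitted_deg + 180) % 360 - 180)
        return delta > tolerance_deg

box = BoundingBox(width=60, height=170, camera_id="cam-07",
                  direction_deg=180.0, velocity=4.2)
print(box.moving_wrong_way(permitted_deg=0.0))   # True: heading against the flow
```

A few numbers per object per frame are enough for the wrong-way and forbidden-space checks, which is precisely why the crude box could count as elegant.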

However, for abandoned luggage, something slightly different was required. In experimentation, in order to successfully and consistently demarcate human-shaped objects and luggage-shaped objects and their separation, a more precisely delimited boundary needed to be drawn around the putative objects. This required the creation of a pixel mask that enabled the algorithmic system to make a more precise sense of the human- and luggage-shaped objects, when and if they separated (Fig. 2.4).

**Fig. 2.4** A close-cropped pixelated parameter for human- and luggage-shaped objects

This more closely cropped means to parameterise and classify objects could then be used to issue alerts within the system. IF a human-shaped object split from a luggage-shaped object, IF the human-shaped object continued to move, IF the luggage-shaped object remained stationary, IF the luggage-shaped object and human-shaped object were over a certain distance apart and IF the human-shaped object and luggage-shaped object stayed apart for a certain amount of time, THEN this would achieve the conditions under which the algorithmic system could issue an alert to operatives. As the following figure shows, once a close-cropped image of what could be classified as a luggage-shaped object was deemed by the system to have lingered beyond a defined time and distance from its human-shaped object, then it would be highlighted in red and sent to operatives for confirmation and, potentially, further action (Fig. 2.5).
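The role of the pixel mask in registering separation might be illustrated as follows: while a human-shaped object carries its luggage, the two form one connected region of mask pixels; once they part, the mask splits into two regions. The flood fill used here is a stand-in, since the text does not specify the project's segmentation method.

```python
# Sketch of using a pixel mask to notice when human- and luggage-shaped
# objects separate: one connected blob while together, two once apart.
# The flood fill is a simple stand-in for the project's segmentation.

def count_blobs(mask):
    """Count 4-connected regions of truthy pixels in a small grid."""
    seen = set()
    blobs = 0
    for y in range(len(mask)):
        for x in range(len(mask[0])):
            if mask[y][x] and (y, x) not in seen:
                blobs += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if ((cy, cx) in seen
                            or not (0 <= cy < len(mask) and 0 <= cx < len(mask[0]))
                            or not mask[cy][cx]):
                        continue
                    seen.add((cy, cx))
                    stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return blobs

together = [[1, 1, 1, 0, 0],
            [0, 0, 0, 0, 0]]   # person and luggage form one region
apart    = [[1, 1, 0, 0, 1],
            [0, 0, 0, 0, 0]]   # the luggage pixels have split off

print(count_blobs(together), count_blobs(apart))  # 1 2
```

It is this extra precision at the moment of splitting that the bounding boxes could not provide, at the cost of more pixel-level processing.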

In place of the concise elegance of the imprecise bounding box, this more precise pixel-cropped form of parameterisation was computationally more demanding, requiring a little more time and more processing power. However, it was key to maintaining a set of results in these initial experimentations that satisfied the needs of the system as agreed with SkyPort and StateTrack and could be persuasive to all project partners. That is, it produced results that suggested the project was feasible and ought to continue (although as we will see in Chapter 5, problems with classification continued to be difficult to resolve). The bounding boxes lacked the precision to give effect to the IF-THEN rules of the abandoned luggage algorithm.

The human-shaped object was thus accomplished in two forms—as a bounding box and a more closely cropped image. The bounding boxes, although somewhat crude, were central to the next stages in algorithmic experimentation in Route Reconstruction and the issuing of alerts to operatives (see Chapter 3) and deletion (see Chapter 4). For now, our algorithm could be satisfied that it had been able to participate in at least a modified, initial and hesitant, experimental form of everyday life. It had not succeeded entirely in meeting all the goals of the project yet, it had initially struggled to produce a set of results that could elegantly capture sufficient information to accurately and consistently identify abandoned luggage and had to be changed (to a pixel-based process), and it was reliant on digital maps and background subtraction, but it had nonetheless started to get into the action.

**Fig. 2.5** An item of abandoned luggage

## Conclusion

In this chapter, I have started to build a sense of the everyday life in which our algorithm was becoming a participant. In experimental spaces, our algorithm was starting to make a particular sense of what it means to be human and what it means to be luggage. The IF-THEN rules and the development of associated software/code, the building of a system architecture and set of components provided the grounds for rendering things like humans and luggage at stake. To return to Pollner's (1974) work (set out in the Introduction), the algorithm was starting to set out the basis for delimiting everyday life. The algorithm was beginning to insist that the stream of digital video data flowing through the system acted on behalf of an account as either luggage-shaped or human-shaped or background to be ignored. In addressing the question of how algorithms participate in everyday life, we have started to see in this chapter that they participate through technical and experimental means. Tinkering with ways to frame the human-shaped object, and deciding on what might count as elegant through concision, satisfaction and persuasion, are all important ways to answer this question. But we can also see that this participation is hesitant. The bounding box is quite elegant for two of the system's algorithmic processes (moving the wrong way and moving into a forbidden area) but not particularly persuasive or satisfactory for its third process (identifying abandoned luggage). And thus far, all we have seen is some initial experimentation, mostly involving the human-shaped objects of project participants. This experimentation is yet to fully escape the protected conditions of experimentation. As we will see in Chapters 4 and 5, moving into real time and real space, many of these issues in relation to algorithmic participation in everyday life have to be reopened.

It is in subsequent chapters that we will start to look into how the algorithm becomes the everyday and how algorithms can even compose the everyday. For now, these questions have been expressed in limited ways, for example when the computer scientists joked about how they would like to change the airport architecture to match the needs of the system. In subsequent chapters, as questions continue regarding the ability of the algorithm to effectively participate in everyday life, these questions resurface. In the next chapter, we will look at how the algorithmic system could become accountable. This will pick up on themes mentioned in the Introduction on transparency and accountability and will explore in greater detail the means through which the everyday life of the algorithm could be made at stake. As the project upon which this book is based was funded in order to produce a more ethical algorithmic system, these questions of accountability were vital.

### References


Norris, C., & Armstrong, G. (1999). *The Maximum Surveillance Society: The Rise of CCTV*. London: Berg.

Pollner, M. (1974). *Mundane Reason.* Cambridge: Cambridge University Press.

Suchman, L. (2011). Subject Objects. *Feminist Theory, 12*(2), 119–145.

Taylor, E. (2010). I Spy with My Little Eye. *Sociological Review, 58*(3), 381–405.



# Accountability and the Algorithm

**Abstract** This chapter develops insights on human-shaped objects and elegance in exploring the possibility of rendering the everyday life of algorithms accountable and the form such accountability might take. Although algorithmic accountability is currently framed in terms of openness and transparency, the chapter draws on ethnographic engagements with the algorithmic system under development to show empirically the difficulties (and indeed pointlessness) of achieving this kind of openness. Rather than presenting an entirely pessimistic view, the chapter suggests that alternative forms of accountability are possible. In place of transparency, the chapter focuses on science and technology studies (STS) work that pursues the characteristics, agency, power and effect of technologies as the upshot of the network of relations within which a technology is positioned. Moving away from the idea that algorithms have fixed, essential characteristics or straightforward power or agency opens up opportunities for developing a distinct basis of accountability in action.

**Keywords** Accountability · Agency · Power · Effects · Transparency

## Opening

Simply being able to see an algorithm in some ways displaces aspects of the drama that I noted in the Introduction to this book. If one of the major concerns with algorithms is their opacity, then being able to look at our abandoned luggage algorithm would be a step forward. However, as I have also tried to suggest thus far in this book, looking at a set of IF-THEN rules is insufficient on its own to render an algorithm accountable. Algorithms combine with system architectures, hardware components, software/code, people, spaces, experimental protocols, results, tinkering and an array of other entities through which they take shape. Accountability for the algorithm would amount to not just seeing the rules (a limited kind of transparency) but making sense of the everyday life of the algorithm—a form of accountability in action. In this chapter, we will move on with the project on airport and train station security and the development of our algorithm to try and explore a means by which accountability might be accomplished. We will again look at the question of how an algorithm can participate in everyday life, but now with an interest in how that everyday life might be opened to account. We will also look further at how an algorithmic means to make sense of things becomes the everyday and what this means for accountability. And we will explore how algorithms don't just participate in the everyday but also compose the everyday. The chapter will begin by setting out a possible means to think through algorithmic accountability. In place of focusing on the abandoned luggage algorithm, we will look at how the algorithmic system makes sense of and composes the everyday through its User Interface and Route Reconstruction system. Then, we will consider a different form of accountability through an ethics board. The chapter will conclude with some suggestions on the everyday life of algorithmic accountability.

### Accountability

Within the project we are considering, the ethical aims put forward from the original bid onwards were to reduce the amount of visual data made visible within a video surveillance system, to reduce the amount of data that gets stored and to do so without developing new algorithms. These were positioned as a basis on which my ethnographic work could hold the system to account. They were also presented as a potential means to address popular concerns regarding questions of algorithmic openness and transparency, at least theoretically enabling the algorithm, its authorship and consequences to be called to question by those subject to algorithmic decision-making processes (James 2013; Diakopoulos 2013). A more accountable algorithm might address concerns expressed in terms of the ability of algorithms to trap us and control our lives (Spring 2011), produce new ways to undermine our privacy (Stalder and Mayer 2009) and have power, an independent agency to influence everyday activities (Beer 2009; Lash 2007; Slavin 2011). A formal process of accountability might also help overcome the troubling opacity of algorithms, addressing Slavin's concern that: 'We're writing things we can no longer read' (2011: n.p.).

However, the social science literature provides a variety of warnings on systems and practices of accounting and accountability. For example, Power (1997) suggests in his formative audit society argument that the motifs of audit have become essential conditions for meeting the aims of regulatory programmes that problematically reorient the goals of organisations. This work draws on and feeds into neo-Foucauldian writing on governmentality (Ericson et al. 2003; Miller 1992; Miller and O'Leary 1994; Rose 1996, 1999). Here, the suggestion is made that, for example, government policies (from assessing value for money in public sector spending, through to the ranking of university research outputs) provide rationales to be internalised by those subject to accounts and accountabilities. The extent and adequacy of the take-up of these rationales then forms the basis for increasing scrutiny of the accounts offered by people or organisations in response. This sets in train a programme of responsibilisation and individualisation whereby subjects are expected to deliver on the terms of the rationale, while taking on the costs of doing so, allowing 'authorities … [to] give effect to government ambitions' (Rose and Miller 1992: 175). For Foucault (1980), this provides: 'A superb formula: power exercised continuously and for what turned out to be a minimal cost' (1980: 158).

In this literature, the endurance of accounts and accountabilities is explained through the structured necessity of repetition. That is, alongside efficiency, accounts and accountabilities become part of an ordered temporality of repeated assessment in, for example, performance measurements where the same organisations and processes are subject to accounts and accountabilities at set intervals in order to render the organisation assessable (Power 1997; Rose 1999; Rose and Miller 1992). For Pentland, this repetition forms audit rituals which have: 'succeeded in transforming chaos into order' (1993: 606). In particular, accounts and accountabilities have introduced a 'ritual which transforms the financial statements of corporate management from an inherently untrustworthy state into a form that the auditors and the public can be comfortable with' (1993: 605). Efforts to make algorithms accountable might thus need to consider the kinds of rituals these procedures could introduce, the power relations they could institute and the problematic steering of organisational goals that could result.

This literature on the formal processes and repetitive procedures for accounting and accountabilities suggests emerging calls for algorithmic accountability would provide a fertile ground for the continued expansion of accounts and accountabilities into new territories. Procedures for accountability might expand for as long as there are new organisations, technologies or audiences available, presenting new opportunities for carrying out the same processes (see, e.g., Osborne and Rose [1999] on the expansion of governmentality; also see Ferguson and Gupta [2002] on the creation of the individual as auditor of their own 'firm'). Accounts and distributions of accountability then become an expectation, something that investors, regulators and other external audiences expect to see. Being able to account for the accountability of a firm can then become part of an organisation's market positioning as transparent and open, as ethical, as taking corporate social responsibility seriously (Drew 2004; Gray 1992, 2002; Neyland 2007; Shaw and Plepinger 2001). Furthermore, as Mennicken (2010) suggests, once accountability becomes an expectation, auditors, for example, can seek to generate markets for their activities. Alternatively, the outcomes of forms of accounts and accountabilities become market-oriented assets in their own right, as is the case with media organisations promoting their league tables as one way to attract custom (such as the Financial Times MBA rankings, see Free et al. 2009). As we will see in Chapter 6, being able to promote the ethical algorithmic system as accountable became key to the project that features in this book, as a way to build a market for the technology.

This may sound somewhat foreboding: accountability becomes a ritual expectation, it steers organisational goals in problematic ways, and it opens up markets for the processes and outputs of accountability. Yet what we can also see in Chapter 2 is that what an algorithm is, what activities it participates in, and how it is entangled in the production of future effects, is subject to ongoing experimentation. Hence, building a ritual for algorithmic accountability seems somewhat distant. It seems too early, and algorithms seem too diverse, to introduce a single and universal, ritualised form of algorithmic accountability. It also seems too early to be able to predict the consequences of algorithmic accountability. A broad range of consequences could ensue from algorithmic accountability. For example, accounts and accountabilities might have unintended consequences (Strathern 2000, 2002), might need to consider the constitution of audience (Neyland and Woolgar 2002), the enabling and constraining of agency (Law 1996), what work gets done (Mouritsen et al. 2001), who and what gets hailed to account (Munro 2001), the timing and spacing of accounts (Munro 2004) and their consequences. But as yet we have no strong grounds for assessing these potential outcomes of accountability in the field of algorithms. And calls for algorithmic accountability have thus far mostly been focused on introducing a means whereby data subjects (those potentially subjected to algorithmic decision-making and their representatives) might be notified and be given a chance to ask questions or challenge the algorithm. An audit would also require that the somewhat messy experimentation of Chapter 2, the different needs and expectations of various different partner organisations, and my own struggles to figure out what I was doing as an ethnographer would all be frozen in time and ordered into an account. 
The uncertainties of experimentation would need to be ignored, and my own ongoing questions would need to be side-lined to produce the kind of order in which formal processes of accountability excel. The everyday life of the algorithm would need to be overlooked. So what should an accountable algorithm look like? How could I, as an ethnographer of a developing project, work through a means to render the emerging algorithm accountable that respected these uncertainties, forms of experimentation and ongoing changes in the system but still provided a means for potential data subjects to raise questions?

One starting point for moving from the kinds of formal processes of accountability outlined above to an approach specifically attuned to algorithms is provided by Science and Technology Studies (STS). The recent history of anti-essentialist or post-essentialist research (Rappert 2001) in STS usefully warns us against attributing single, certain and fixed characteristics to things (and people). Furthermore, STS research on technologies, their development and messiness also suggests that we ought to maintain a deep scepticism towards claims regarding the agency or power of technology to operate alone. As I suggested in the Introduction to this book, in STS work, the characteristics, agency, power and effect of technologies are often treated as the upshot of the network of relations within which a technology is positioned (Latour 1990; Law 1996). Rather than seeing agency or power as residing in the algorithm, as suggested by much of the recent algorithm literature, this STS approach would be more attuned to raising questions about the set of relations that enable an algorithm to be brought into being.

If we take accountability to mean opening up algorithms to question by data subjects and their representatives, this STS approach prompts some important challenges. We need to get close to the everyday life in which the algorithm participates in order to make sense of the relations through which it accomplishes its effects. We need to make this everyday life of the algorithm open to question. But then we also need to know something about how the algorithm is itself involved in accounting for everyday life. How can we make accountable the means through which the algorithm renders the world accountable?

The ethical aims to see less and store less data provided one basis for holding the system to account, but developing the precise method for rendering the algorithmic system accountable was to be my responsibility. Traditional approaches to ethical assessment have included consequentialist ethics (whereby the consequences of a technology, e.g., would be assessed) and deontological ethics (whereby a technology would be assessed in relation to a set of ethical principles; for a discussion, see Sandvig et al. 2013). However, these traditional approaches seemed to fit awkwardly with the STS approach and its post-essentialist warnings. To judge the consequences of the algorithm or to what extent an algorithm matched a set of deontological principles appeared to require the attribution of fixed characteristics and a fixed path of future development to the algorithm while it was still under experimentation (and, for all I knew, this might be a ceaseless experimentation, without end). As a counter to these approaches, ethnography seemed to offer an alternative. In place of any assumptions at the outset regarding the nature and normativity of algorithms, their system, the space, objects or people with whom they would interact in the project (a deontological ethics), my ethnography might provide a kind of unfolding narrative of the nature of various entities and how these might be made accountable. However, unlike a consequentialist ethics whereby the outcomes of the project could be assessed against a fixed set of principles, I instead suggested that an in-depth understanding of how the algorithms account for the world might provide an important part of accountability.

If putting in place a formal process of accountability and drawing on traditional notions of ethics were too limited for rendering the algorithm accountable, then what next? My suggestion to the project participants was that we needed to understand how the algorithm was at once a participant in everyday life and used that participation to compose accounts of everyday life. Algorithmic accountability must thus move between two registers of accountability. This first sense of accountability, through which the algorithm might be held to account, needed to be combined with a second sense of accountability through which the algorithm engages in the process of making sense of the world. I suggested we could explore this second sense of accountability through ethnomethodology.

In particular, I looked to the ethnomethodological use of the hyphenated version of the term: account-able (Garfinkel 1967; Eriksen 2002). Garfinkel suggests that "the activities whereby members produce and manage settings of organized everyday affairs are identical with members' procedures for making those settings 'account-able'" (1967: 1). For ethnomethodologists, this means that actions are observable-reportable; their character derives from the ability of other competent members to assess and make sense of actions. Importantly, making sense of actions involves the same methods as competently taking part in the action. To be account-able thus has a dual meaning of being demonstrably open to inspection as an account of some matter and being able to demonstrate competence in making sense of some matter (see Lynch 1993; Dourish 2004 for more on this). This might be a starting point for a kind of algorithmic account-ability in action.

Although ethnomethodologists have studied the account-able character of everyday conversations, they have also developed a corpus of workplace studies (Heath and Button 2002). Here, the emphasis is on the account-able character of, for example, keeping records, following instructions, justifying actions in relation to guidelines and informing others what to do and where to go (Lynch 1993: 15). For Button and Sharrock (1998), actions become organisationally account-able when they are done so that they can be seen to have been done on terms recognisable to other members within the setting as competent actions within that organisation. Extending these ideas, studying the algorithm on such terms would involve continuing our study of the work of computer scientists and others involved in the project as we started in Chapter 2, but with an orientation towards making sense of the terms of account-ability within which the algorithm comes to participate in making sense of a particular scene placed under surveillance. This is not to imply that the algorithm operates alone. Instead, I will suggest that an understanding of algorithmic account-ability can be developed by studying how the algorithmic system produces outputs that are designed to be used as part of organisational practices to make sense of a scene placed under surveillance by the algorithmic system. In this way, the human-shaped object and luggage-shaped object of Chapter 2 can be understood as part of this ongoing, account-able production of the sense of a scene in the airport or train station in which the project is based. I will refer to these sense-making practices as the account-able order of the algorithmic system. Importantly, having algorithms participate in account-ability changes the terms of the account-able order (in comparison with the way sense was made of the space prior to the introduction of the algorithmic system).

Making sense of this account-able order may still appear to be some distance from the initial concerns with accountability which I noted in the opening to this chapter, of algorithmic openness and transparency. Indeed, the ethnomethodological approach appears to be characterised by a distinct set of concerns, with ethnomethodologists interested in moment-to-moment sense-making, while calls for algorithmic accountability are attuned to the perceived needs of those potentially subject to actions deriving from algorithms. The account-able order of the algorithm might be attuned to the ways in which algorithms participate in making sense of (and in this process composing) everyday life. By contrast, calls for algorithmic accountability are attuned to formal processes whereby the algorithm and its consequences can be assessed. However, Suchman et al. (2002) suggest that workplace actions, for example, can involve the simultaneous interrelation of efforts to hold each other responsible for the intelligibility of our actions (account-ability) while located within constituted 'orders of accountability' (164). In this way, the account-able and the accountable, as different registers of account, might intersect. In the rest of this chapter, I will suggest that demands for an algorithm to be accountable (in the sense of being transparent and open to question by those subject to algorithmic decision-making and their representatives) might benefit from a detailed study of the account-able order of an algorithmic system under development. Being able to elucidate the terms of algorithmic participation in making sense of scenes placed under surveillance—as an account-able order—might assist in opening the algorithmic system to accountable questioning. However, for this to be realised requires practically managing the matter of intersecting different registers of account.

Intersecting the account-able with the accountable took some effort even before the project began. I proposed combining my ethnomethodologically inflected ethnography of the algorithm's account-able order with a particular form of accountability—an ethics board to whom I would report and who could raise questions. The interactions through which the algorithm came to make sense of particular scenes—as an account-able order—could be presented to the ethics board so that they could raise questions on behalf of future subjects of algorithmic decision-making—a form of algorithmic accountability. As the following sections will show, intersecting registers of accounts (account-ability and accountability) did not prove straightforward. The next section of the chapter will detail efforts to engage with the account-able order of the algorithm through the User Interface and Route Reconstruction components of the system. We will then explore the intersection of account registers through the ethics board.

# Account-ability Through the User Interface and Route Reconstruction

For the algorithm to prove itself account-able—that is, demonstrably able to participate in the production of accounts of the everyday life of the train station and the airport—an expansion of the activities we already considered in Chapter 2 was required. Being able to classify human-shaped and luggage-shaped objects through mapping the fixed attributes of the setting, parameterisation of the edges of objects, object classification, identifying the action states of objects, the production of bounding boxes or close-cropped images was also crucial to figuring out a means to participate in the production of accounts. These efforts all went into the production of alerts (a key form of algorithmic account) for operatives of the train station and airport surveillance system who could then take part in the production of further accounts. They could choose to ignore alerts they deemed irrelevant or work through an appropriate response, such as calling for security operatives in, for example, the Departure Lounge to deal with an item of luggage. At least, that was how the project team envisaged the future beyond the experimental phase of algorithmic work. Project participants from StateTrack and SkyPort who worked with the existing surveillance system on a daily basis, at this experimental stage, also more or less concurred with this envisaged future.

A future in which alerts were sent to operatives, cutting down on the data that needed to be seen and cutting down on the data that needed to be stored, seemed a potentially useful way forward for operatives and their managers (who were also interested in cutting down on data storage costs, see Chapters 4 and 6). But this was a cautious optimism. At this experimental stage of the project, operatives wanted to know what the alerts would look like when they received them, how they would issue responses, and how others in the airport would receive and further respond to those responses. Their everyday competences were oriented towards seeing as much as possible in as much detail as possible and reading the images for signs of what might be taking place. They mostly operated with what Garfinkel (1967) referred to as a relation of undoubted correspondence between what appeared to be happening in most images and what they took to be the unfolding action. In moments of undoubted correspondence, it was often the case that images could be ignored—they appeared to be what they were because very little was happening. However, it was those images that raised a concern—a relation of doubted correspondence between what appeared to be going on and what might unfold—that the operatives seemed to specialise in. It was these images of concern that they had to read, make sense of, order and respond to, that they would need to see translated into alerts (particularly if all other data was to remain invisible or even be deleted). How could the algorithms for abandoned luggage, moving the wrong way and entry into a forbidden area act as an experimental basis for handling this quite complex array of image competencies?

The computer scientists established an initial scheme for how the three types of relevance detection algorithm would work. The Event Detection system would sift through video frames travelling through the system and use the media proxy we met in Chapter 2 to draw together the streams of video from cameras across the surveillance network. This would use a Real-Time Streaming Protocol for MPEG4, with JSON (JavaScript Object Notation) as a data interchange format for system analysis. Each stream would have a metadata time stamp. The relevance detection algorithms for abandoned luggage, moving the wrong way and entering a forbidden space would then select out putative object types (using a Codebook algorithm for object detection), focusing on their dimensions, direction and speed. As we noted in Chapter 2, the system would then generate bounding boxes for objects, which would in turn generate a stream of metadata related to the bounding box based on its dimensions and timing—how fast it moved and in what direction. This would also require a further development of the map of fixed attributes used for background subtraction in Chapter 2. Areas where entry was forbidden for most people (e.g. secure areas and train tracks) and areas where the direction of movement was sensitive (e.g. exits at peak times in busy commuter train stations) would need to be added to the maps. Producing an alert was no longer limited to identifying a human-shaped object (or luggage-shaped or any other shaped object), even though that was challenging in its own ways. The system would now have to use these putative classifications to identify those human-shaped objects moving in the wrong direction or into the wrong space, along with those human-shaped objects that became separated from their luggage-shaped objects. Objects' action states as moving or still, for example, would be central. 
For the algorithms to be able to do this demonstrably within the airport and train station was crucial to being able to produce alerts and participate in account-ability.
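The scheme above can be sketched in outline. The following is a minimal illustration, not the project's code: all names, thresholds and the map representation are assumptions, and the wrong-direction rule is omitted for brevity (it would compare an object's direction metadata against the sensitive-direction map regions in the same way).

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of relevance-detection rules over bounding-box
# metadata. Classification (human vs. luggage) is assumed already done
# by an upstream object detector; here we only apply 'IF-THEN' checks.

@dataclass
class TrackedObject:
    obj_id: int
    kind: str                       # putative classification: 'human' or 'luggage'
    x: float                        # position on the map of fixed attributes
    y: float
    speed: float                    # derived from bounding-box metadata
    moving: bool                    # action state: moving vs. still
    still_since: float              # seconds the object has been stationary
    owner_id: Optional[int] = None  # luggage only: the human it arrived with

# Map regions where entry is forbidden, as (x1, x2, y1, y2) rectangles.
FORBIDDEN_AREAS = [(50.0, 60.0, 10.0, 20.0)]
ABANDON_SECONDS = 30.0              # stillness threshold for 'abandoned'
OWNER_RADIUS = 5.0                  # an owner further away counts as separated

def in_forbidden_area(o: TrackedObject) -> bool:
    return any(x1 <= o.x <= x2 and y1 <= o.y <= y2
               for x1, x2, y1, y2 in FORBIDDEN_AREAS)

def alerts(objects):
    """Yield (rule, obj_id) pairs: the text alerts for one metadata frame."""
    humans = {o.obj_id: o for o in objects if o.kind == "human"}
    for o in objects:
        # IF a human-shaped object is inside a forbidden region THEN alert.
        if o.kind == "human" and in_forbidden_area(o):
            yield ("entered_forbidden_area", o.obj_id)
        # IF a luggage-shaped object is still for too long and separated
        # from the human-shaped object it arrived with THEN alert.
        if o.kind == "luggage" and not o.moving and o.still_since >= ABANDON_SECONDS:
            owner = humans.get(o.owner_id)
            if owner is None or abs(owner.x - o.x) + abs(owner.y - o.y) > OWNER_RADIUS:
                yield ("abandoned_luggage", o.obj_id)
```

In the actual system such checks would run over streamed metadata rather than in-memory objects; the point is only that each alert is the outcome of a rule applied to classified bounding-box attributes and action states.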

But this didn't reduce the surveillance operatives' concerns about the form in which they would receive these alerts. Participating in account-ability was not just about producing an alert. The alerts had to accomplish what Garfinkel (1963) termed the congruence of relevances. Garfinkel suggested that any interaction involved successive turns to account-ably and demonstrably make sense of the scene in which the interactions were taking place. This required the establishment of an at least in-principle interchangeability of viewpoints—that one participant in the interaction could note what was relevant for the other participants, could make sense of what was relevant for themselves and the other participants and could assume that other participants shared some similar expectations in return. Relevances would thus become shared or congruent through the interaction. Garfinkel (1963) suggested that these were strongly adhered to, forming what he termed constitutive expectancies for the scene of the interaction. In this way, building a shared sense of the interaction, a congruence of relevances, was constitutive of the sense accomplished by the interaction.

The algorithmic system seemed to propose a future that stood in some contrast to the operatives' current ways of working. Prior to the algorithmic system, surveillance operatives' everyday competences were oriented towards working with images, other operatives, airport or train station employees, their managers and so on, in making a sense of the scene. The rich and detailed interaction successively built a sense of what it was that was going on. Constitutive expectancies seemed to be set in place. The move to limit the amount of data that was seen seemed to reduce the array of image-based cues through which accomplishing the sense of a scene could take place. Given that an ethical aim of the project was to reduce the scope of data made visible and given that this was central to the funding proposal and its success, the computer scientists needed to find a way to make this work. They tried to work through with the surveillance operatives how little they needed to see for the system still to be considered functional. In this way, the ethical aims of the project began to form part of the account-able order of the algorithmic system that was emerging in this experimental phase of the project. Decision-making was demonstrably organised so that it could be seen to match the ethical aims of the project at the same time as the emerging system could be constituted as a particular material-algorithmic instantiation of the ethical aims. Accomplishing this required resolution of issues for the operatives and the computer scientists of just what should be seen and how such visibility should be managed.

This required a series of decisions to be made about the User Interface. The computer scientists suggested that one way to move forward with the ethical aims of the project was to develop a User Interface with no general visual component. This both made sense as a demonstrable, account-able response to the ethical aims (to reduce visibility) and constituted a visually and materially available form for these otherwise somewhat general aims. In place of the standard video surveillance bank of monitors continually displaying images, operatives would be presented only with text alerts (Fig. 3.1) produced via our algorithms' 'IF-THEN' rules. An operative would then be given the opportunity to click on a text alert, and a short video of the several seconds that had generated the alert would appear on the operative's screen. The operative would then have the option of deciding whether the images did indeed portray an event worthy of further scrutiny or could be ignored. An operative could then tag data as relevant (and it would then be stored) or irrelevant (and it would then be deleted; see Chapter 4). The User Interface could then participate in the accomplishment of the ethical aims to see less and store less. It would also provide a means for our algorithms to become competent participants in the account-able order of interactions. The User Interface would provide the means for the algorithms to display to operatives that they were participating in the constitutive expectancies of making sense of a scene in the airport or train station. They


**Fig. 3.1** Text alerts on the user interface

were participating in establishing the shared or congruent relevance of specific images—that an image of a human-shaped object was not just randomly selected by the algorithm, but displayed its relevance to the operative as an alert, as something to which they needed to pay attention and complete a further turn in interaction. The algorithm was displaying its competence in being a participant in everyday life.
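The turn-taking that the User Interface was meant to support can be caricatured in a few lines. This is a hypothetical sketch, assuming a simple dictionary store and callback functions; it is not the system's actual API, only an illustration of the structure: text alert in, short clip on request, operative tag out, with untagged or irrelevant data never retained.

```python
# One interactional 'turn' between an algorithmic text alert and an
# operative. All names here are illustrative assumptions.

def handle_alert(alert, get_clip, tag_clip, store):
    """Show an alert, fetch its clip on demand, act on the operative's tag."""
    clip = get_clip(alert)         # the few seconds of video that raised the alert
    tag = tag_clip(alert, clip)    # operative judges: 'relevant' or 'irrelevant'
    if tag == "relevant":
        store[alert["id"]] = clip  # retained for further security action
    # anything tagged irrelevant is simply never stored (and can be deleted)
    return tag
```

The design choice the sketch captures is that only alerted clips ever become visible or storable; everything else in the video stream stays unseen, giving a material form to the 'see less, store less' aims.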

This might seem like a big step forward for our algorithms. It might even mean a step from experimentation towards actual implementation. But the operatives and their managers swiftly complained when they were shown the User Interface: How could they maintain security if all they got to see was (e.g.) an image of an abandoned item of luggage? As I mentioned in Chapter 2, to secure the airport or train station the operatives suggested that they needed to know who had abandoned the luggage, and when and where they went. A neatly cropped image of an item of luggage with a red box around it, or a human-shaped object within a bounding box that had moved into a forbidden space or been recorded moving the wrong way, was limited in its ability to take part in making sense of the scene. The algorithms' ability to take part in everyday life by participating in holding everyday life to account was questioned. As such, the emerging account-able order of the algorithmic system and the design decisions which acted as both a response to, and gave form to, the project's ethical aims were subject to ongoing development, particularly in relation to operatives' everyday competences.

This led to discussion among project participants, the computer scientists and StateTrack and SkyPort in particular, about how surveillance operatives went about making sense of, for example, abandoned luggage. Everyday competences that might otherwise never be articulated needed to be drawn to the fore here. Operatives talked of the need to know the history around an image, what happened after an item had been left, and with whom people had been associating. Computer scientists thus looked to develop the Route Reconstruction component of the system. This was a later addition to the system architecture as we saw in Chapter 2. The University 1 team of computer scientists presented a digital surveillance Route Reconstruction system they had been working on in a prior project (using a learning algorithm to generate probabilistic routes). Any person or object once tagged relevant, they suggested, could be followed backwards through the stream of video data (e.g. where had a bag come from prior to being abandoned, which human had held the bag) and forwards (e.g. once someone had dropped a bag, where did they go next). This held out the potential for the algorithms and operatives to take part in successively and account-ably building a sense for a scene. From a single image of, say, an abandoned item of luggage, the algorithm would put together histories of movements of human-shaped objects and luggage-shaped objects and future movements that occurred after an item had been left. As operatives clicked on these histories and futures around the image of abandoned luggage, both operatives and algorithms became active participants in successively building shared relevance around the image. Histories and futures could become a part of the constitutive expectancies of relations between algorithms and operatives.

Route Reconstruction would work by using the background maps of fixed attributes in the train station and airport and the ability of the system to classify human-shaped objects and place bounding boxes around them. Recording and studying the movement of human-shaped bounding boxes could be used to establish a database of popular routes human-shaped objects took through a space and the average time it took a human-shaped object to walk from one camera to another. The system would use the bounding boxes to note the dimensions, direction and speed of human-shaped objects. The Route Reconstruction system would then sift through the digital stream of video images to locate, for example, a person who had been subject to an alert and trace the route from which they were most likely to have arrived (using the database of most popular routes), how long it should have taken them to appear on a previous camera (based on their speed) and search for any

**Fig. 3.2** A probabilistic tree and children (B0 and F0 are the same images)

human-shaped objects that matched their bounding box dimensions. If unsuccessful, the system would continue to search other potential routes and sift through possible matches to send to the operatives, who could then tag those images as also relevant or irrelevant. The idea was to create what the computer scientists termed a small 'sausage' of data from among the mass of digital images. The Route Reconstruction system used probabilistic trees (Fig. 3.2), which took an initial image (of, e.g., an abandoned item of luggage and its human-shaped owner) and then presented possible 'children' of that image (based on dimensions, speed and most popular routes) until operatives were happy that they had established the route of the person and/or object in question. Probability, background maps, object classification and tracking became a technical means for the algorithms to participate in holding everyday life to account.
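Under stated assumptions, the probabilistic-tree search might look as follows. The prior table, similarity measure and data structures are all illustrative, not the University 1 team's implementation: candidate 'children' (possible earlier sightings on other cameras) are ranked by learned route popularity times bounding-box similarity, and an operative confirms or rejects each step, successively building the small 'sausage' of data.

```python
# Hypothetical learned priors: probability that an object now on the
# second camera previously appeared on the first (a 'most popular routes'
# database reduced to a toy table).
ROUTE_PRIOR = {("cam2", "cam1"): 0.7, ("cam3", "cam1"): 0.3}

def similarity(a, b):
    """Crude bounding-box similarity on (width, height) in pixels."""
    return 1.0 / (1.0 + abs(a[0] - b[0]) + abs(a[1] - b[1]))

def children(sighting, candidates):
    """Rank candidate predecessor sightings, most probable first."""
    scored = []
    for c in candidates:
        prior = ROUTE_PRIOR.get((c["camera"], sighting["camera"]), 0.0)
        if prior > 0.0:
            scored.append((prior * similarity(sighting["bbox"], c["bbox"]), c))
    scored.sort(key=lambda sc: sc[0], reverse=True)
    return [c for _, c in scored]

def reconstruct(sighting, candidates, confirm, depth=5):
    """Walk backwards from an alerted sighting, one confirmed child at a time."""
    route = [sighting]
    for _ in range(depth):
        nxt = next((c for c in children(route[-1], candidates) if confirm(c)), None)
        if nxt is None:
            break
        route.append(nxt)
    return route
```

The `confirm` callback stands in for the operative's tagging of each proposed 'child' as relevant or irrelevant; a strong bounding-box match can outweigh a less popular route, which is why operatives, not the priors alone, settle the route.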

As a result of operatives' articulation of a potential clash between an ethical aim of the project (to reduce visibility) and the everyday competences of surveillance operatives (to secure a train station or airport space through comprehensive visibility), the account-able order of work between computer scientists, end-users, their working practices and the User Interface shifted somewhat to incorporate the new Route Reconstruction component. Route Reconstruction became a basis for account-ably acknowledging the existing competences of operatives in securing a space. The small 'sausages' of data and probabilistic 'children' became a means of broadening the number of participants in account-ably accomplishing a sense of the everyday life of the train station and airport. Yet having 'sausages' of data and new forms of metadata (used to produce 'children') might initially appear to move the project away from its stated ethical aim to reduce the amount of surveillance data made visible—this became an issue for questions of accountability asked on behalf of future subjects of algorithmic decision-making, as we will see.

At this point (at least for a time), it seemed that I was in a position to make an ethnographic sense of the account-able order of the algorithmic system that would avoid an overly simplified snapshot. In place of a static audit of the system was an account of an emerging order in which terabytes of visual, video data would be sifted by relevance detection algorithms, using background subtraction models to select out proto-relevant human-shaped and other objects. These would be further classified through specific action states (abandoning luggage, moving the wrong way, moving into a forbidden space) that could be the basis for an alert. Operatives would then have the responsibility to decide on future courses of action as a result of the alerts they were sent (e.g. alerting airport security staff to items of luggage). The alerts were the first means through which the algorithmic system could participate in the account-able order of the scene placed under surveillance. Subsequent operative responses could also shift responsibility for a second form of account-able action back onto the algorithmic system if Route Reconstruction was deemed necessary, with probabilistic trees and children designed to offer images of proto-past and subsequent actions (once again to be deemed relevant by operatives). Through this account-able order, the algorithmic system was involved in making sense of the everyday life of particular spaces, such as an airport or train station, and held out the possibility of contributing to changes in operatives' everyday competences in securing those spaces. The presence of the algorithmic system proposed notable changes in the operatives' activities. Instead of engaging with large amounts of video data in order to make decisions, operatives would only be presented with a small amount of data to which their responses were also limited. 
Conversely, for the algorithmic system to work, far greater amounts of data were required prior to the system operating (e.g. digitally mapping the fixed attributes of a setting such as an airport and fixing in place parameters for objects such as luggage and humans, producing bounding boxes, metadata, tracking movements, constituting a database of popular routes). The introduction of the algorithmic system also seemed to require a much more precise definition of the account-able order of airport and train station surveillance activities. The form that the order took was both oriented to the project's ethical aims and gave a specific form to those aims. Yet this emerging form was also a concern for questions of accountability being asked on behalf of future data subjects—those who might be held to account by the newly emerging algorithmic system.

The specific material forms that were given to the project's ethical aims—such as the User Interface and Route Reconstruction system—were beginning to intersect with accountability questions being raised by the ethics board. In particular, how could this mass of new data being produced ever meet the ethical aim to reduce data or the ethical aim to not develop new surveillance algorithms? In the next section, I will explore the challenges involved in this intersection of distinct registers of account by engaging with the work of the ethics board.

# Account-ability and Accountability Through the Ethics Board

As I suggested in the opening to this chapter, formal means of accountability are not without their concerns. Unexpected consequences, rituals, the building of new assets are among an array of issues with which accountability can become entangled. In the algorithm project, the key entanglement was between the kinds of account-ability that we have seen developing in this chapter, through which the algorithms began to participate more thoroughly in everyday life, and accountability involving questions asked on behalf of future data subjects—those who might be subject to algorithmic decision-making. This latter approach to accountability derived from a series of expectations established in the initial project bid, among project partners and funders that somehow and in some way the ethical aims of the project required an organised form of assessment. This expectation derived partly from funding protocols that place a strong emphasis on research ethics, the promises of the original funding proposal to develop an ethical system, and a growing sense among project participants that an ethical, accountable, algorithmic surveillance system might be a key selling point (see Chapter 6). This signalled a broadening in the register of accounts, from the algorithms participating in account-ability to the algorithms being subjected to accountability.

The ethics board became the key means for managing the accountable and the account-able. It was not the case that the project could simply switch from one form of account to another or that one took precedence over the other. Instead, the project—and in particular me, as I was responsible for assessing the ethics of the emerging technology—had to find a way to bring the forms of account together. The ethics board was central to this as it provided a location where I could present the account-able order of the algorithmic surveillance system and provoke accountable questions of the algorithms. The ethics board comprised a Member of the European Parliament (MEP) working on redrafting the EU Data Protection Regulation, two national Data Protection Authorities (DPAs), two privacy academics and two members of privacy-focused civil liberty groups. The ethics board met three times during the course of the project, and during these meetings, I presented my developing study of the account-able order of the algorithmic system. I presented the ways in which the algorithmic system was involved in making sense of spaces like an airport and a train station, how it was expected to work with operatives' everyday competences for securing those spaces and how the system gave form to the project's ethical aims. In place of buying into the claims made on behalf of algorithms by other members of the project team or in popular and academic discussions of algorithms, I could present the account-able order as a more or less enduring, but also at times precarious focus for action. In response, members of the ethics board used my presentations along with demonstrations of the technology to open up the algorithmic system to a different form of accountability by raising questions to be included in public project reports and fed back into the ongoing project.

Ethics board members drew on my presentations of the account-able order of the algorithmic system to orient their questions. In the first ethics board meeting (held approximately ten months into the project), one of the privacy-focused academics pointed to the centrality of my presentation for their considerations:

From a social scientist perspective it is not enough to have just an abstract account for ethical consideration. A closer understanding can be brought about by [my presentation's] further insight into how [the system] will work.

The way the system 'will work'—its means of making sense of the space of the airport and train station—encouraged a number of questions from the ethics board, enabling the system to be held accountable. For example, the Data Protection Officers involved in the board asked during the first meeting:

Is there a lot of prior data needed for this system? More so than before? Are people profiled within the system?

How long will the system hold someone's features as identifiable to them as a tagged suspect?

These questions drew attention to matters of concern that could be taken back to the project team and publicly reported (in the minutes of the ethics board) and subsequently formed the basis for response and further discussion at the second ethics board meeting. The questions could provide a set of terms for making the algorithmic system accountable through being made available (in public reports) for broader consideration. The questions could also be made part of the account-able order of the algorithmic system, with design decisions taken on the basis of questions raised. In this way, the computer scientists could ensure that there was no mechanism for loading prior data into the system (such as a person's dimensions, which might lead to them being tracked) and that metadata (such as the dimensions of human-shaped objects) were deleted along with video data, to stop individual profiles being created or 'suspects' from being tagged. Data Protection Officers sought to 'use the committee meetings to clearly shape the project to these serious considerations.' The 'serious considerations' here were the ethical aims. One of the representatives of the civil liberties groups also sought to utilise the access offered by the ethics board meetings but in a different way, noting that 'As systems become more invisible it becomes more difficult to find legitimate forms of resistance.'

To 'shape the project' and 'find legitimate forms of resistance' through the project seemed to confirm the utility of intersecting account-ability and accountability, opening up distinct ways for the system to be questioned and for that questioning to be communicated to further interested audiences. However, as the project progressed, a series of issues emerged that complicated my presentation of the account-able order of the algorithmic system to the ethics board and hence made the intersection of account-ability and accountability more difficult.

For example, I reported to the ethics board a series of issues involved in system development. This included a presentation of the challenges involved in 'dropping in' existing algorithms. Although one of the project's opening ethical aims was that no new algorithms would be developed and that existing algorithms could be 'dropped into' existing surveillance networks, these were also termed 'learning' algorithms. I presented to the ethics board an acknowledgement from both teams of computer scientists that the algorithms needed to 'learn' to operate in the end-user settings; that algorithms for relevancy detection and the Route Reconstruction component had to run through streams of video data; that problems in detecting objects and movements had to be continually reviewed; and that this took '10s of hours.' When problems arose in relation to the lighting in some areas of end-user sites (where, e.g., the glare from shiny airport floors appeared to baffle our abandoned luggage algorithm, which kept constituting the glare as abandoned luggage), the code/software tied to the relevancy detection algorithm had to be developed—this, I suggested to the ethics board, is what constituted 'learning.'

These ongoing changes to the system through 'learning' emphasised the complexities of making sense of the algorithmic system's account-able order; the way the system went about making sense changed frequently as it was experimented with, and my reporting to the ethics board needed to manage and incorporate these changes. Alongside the continual development of 'learning' algorithms, other issues emerged as the system developed, including an initial phase of experimentation in which none of the system components would interact. In this instance, it turned out that one of the project members was using obsolete protocols (based on VAPIX), which other project members could not use or did not want to use. Attempting to resolve this issue took 114 e-mails and four lengthy telephone conference calls in one month of the project. Other issues that emerged included questions of data quality, frame rates, trade union concerns, pixelation and compression of video streams, each of which led to changes in the ways in which the system would work. In particularly frenzied periods of project activity, I found it more challenging to maintain a clear notion of what constituted the 'order' of the algorithmic system to report to the ethics board, as major features (e.g. which components of the system talked to each other) would be changed in quite fundamental ways. When the Route Reconstruction and Privacy Enhancement components of the system were also brought together with the relevancy detection algorithms, reporting became more difficult again.

The ongoing changes of system development emphasised the value of building an understanding of the system's developing account-able order. Making sense of the way in which the algorithmic system (its components, design decisions, designers, software, instructions and so on) was involved in making sense of the train station and airport avoided providing a more or less certain account developed from a single or brief timeframe that simply captured and replayed moments of system activity, as if the system had a singular, essential characteristic. Instead, understanding the account-able order held out the promise of making sense of the ordering practices of the system under development, of how algorithms went about making sense of and participating in everyday life. In the absence of such an approach to algorithms, the risk would be that multiple assumptions (that might be wrong or only correct for a short time) regarding the nature of algorithms were set in place and formed the basis for accountability.

Tracing system developments and the changing account-able order of the algorithmic system for presentation to the ethics board also became the principal means of intersecting the different registers of account-ability and accountability. In place of presenting a static picture of the algorithmic system, changes in the ordering activities of the system could be demonstrated and discussed in relation to the project's ethical aims. This was particularly important in ethics board meetings as changes that emerged through system development appeared to change the specific form given to the project's ethical aims. For example, as the project developed, a question for the ethics board was how far could an algorithm 'learn' and be changed before it was considered sufficiently 'new' to challenge the ethical aim of the project to not introduce new algorithms? Furthermore, how much new data from bounding boxes, object classification and action states could be produced before it challenged the ethical principle to reduce data? This intersection of account-ability and accountability was not resolved in any particular moment, but became a focal point for my ethics board presentations and ensuing discussions and public reporting.

However, as the project and ethics board meetings progressed, my role in producing accounts became more difficult. I was involved in making available an analysis of the account-able order of the system partly as a means to open the system to questions of accountability, which I would then publicly report and feed back to project members. At the same time, I was not just creating an intersection between account-ability and accountability, I risked being deeply involved in producing versions of the system's account-able order which might steer ethics board members towards recognising that the system had achieved or failed to achieve its ethical aims and thus defuse or exacerbate accountability concerns. I was the algorithm's proxy, mediating its ability to grasp everyday life through my ability to grasp the details of its abilities. As one of the Data Protection Officers on the ethics board asked, 'What is Daniel's role? How can he ensure he remains impartial?'

Rather than try to resolve this problem in a single ethics board meeting, I sought instead to turn this issue of my own accountability into a productive tension by bringing as much as possible to the ethics board. My own developing account of the account-able order of the algorithmic system, the computer scientists, end-users and the technology as it developed could all be drawn together in ethics board meetings. The result was not a single, agreed-upon expert view on the system. In place of a single account, the meetings became moments for different views, evidence, material practices and so on to be worked through. The effect was to intersect account-ability and accountability in a way that enabled questions and attributions of algorithmic responsibility and openness to be brought into the meetings and discussed with ethics board members, computer scientists, the system and my own work and role in the project. Accountability was not accomplished in a single moment, by a single person, but instead was distributed among project members and the ethics board and across ongoing activities, with questions taken back to the project team between meetings and even carried forward into future projects after the final ethics board meeting. And the intersection of account-ability and accountability was not simply a bringing together of different registers of account, as if two different forms of account could, for example, sit comfortably together on the same page in a report to the ethics board. The intersecting of account-ability and accountability itself became a productive part of this activity, with questions of accountability (e.g. how much has changed in these algorithms?) challenging the account-able order of the algorithmic system and the more or less orderly sense-making practices of the algorithmic system being used to draw up more precise questions of accountability.
The algorithms' means to participate in the account-ability of everyday life in the airport became the means to make the algorithms available to this different sense of accountability through the ethics board.

### Conclusion

In this chapter, we can see that our algorithms are beginning to participate in everyday life in more detailed ways. They are not only classifying putative human-shaped and luggage-shaped objects. They are also taking part in the production of accounts that make sense of the actions in which those objects are also taking part: being abandoned, moving the wrong way, moving into a forbidden space. This participation in the account-able order of everyday life is an achievement based on years of work by the computer scientists and significant efforts in the project to work with operatives to figure out their competences and how a system might be built that respects and augments these competences while also accomplishing the project's ethical aims. Such aims were also the key grounds for intersecting this increasing participation in the account-ability of everyday life with the sense of accountability pursued by the ethics board. Regular meetings, minutes, publicly available reports, the development of questions into design protocols for the emerging system, creating new bases for experimentation, each formed ways in which accountability could take shape—as a series of questions asked on behalf of future data subjects. In a similar manner to the literature that opened this chapter, this more formal process of accountability came with its own issues. Unanticipated questions arose, the system being subjected to account kept changing, some things didn't work for a time, and my own role in accountability came under scrutiny. In place of any counter expectation that algorithms could be made accountable in any straightforward, routine manner, came this series of questions and challenges.

What, then, can be said about future considerations of algorithms and questions of accountability? First, it seemed useful in this project to engage in detail with the account-able order of the algorithmic system. This displaced a formal approach to accountability, for example, carrying out an audit of algorithmic activity, with an in-depth account of the sense-making activities of the system. Second, however, this approach to account-ability did nothing on its own to address questions of accountability—what the concerns might be of future data subjects. Intersecting different registers of account through the ethics board was itself a significant project task and required resources, time and effort. Third, the intersection of account-ability and accountability was productive (raising new questions for the project to take on), but also challenging (requiring careful consideration of the means through which different views could be managed). With growing calls for algorithmic systems to be accountable, open to scrutiny and open to challenge, these three areas of activity set out one possible means for future engagement, intersecting the account-able and the accountable and managing the consequences.

But the challenges for our algorithms did not end here. Although we now finish this chapter with a sense that our algorithms are grasping everyday life in more detail, are more fully participating in everyday life through forms of account-ability and are even beginning to shape everyday life by causing the operatives to reconsider their everyday competences, there is still some way to go. The algorithms have only reached an initial experimental stage. Next, they need to be tested in real time, in real places. They need to prove that they can become everyday. The system components need to prove to the world that they can interact. The Privacy Enhancement System needs to show that it can select and delete relevant data. As we will see in the next chapter, deletion is not straightforward. And as we will see subsequently, real-time testing (Chapter 5) is so challenging that the possibility of building a market value for the technology needs to be re-thought (Chapter 6). But then everyday life is never easy.


# The Deleting Machine and Its Discontents

**Abstract** Deletion was a central component of the algorithmic system studied in this book. Deletion is also a key motif of contemporary data management: concepts such as proportionality, necessity, a shelf-life for data, right to be forgotten or right to erasure and specific definitions of privacy all relate to deletion. In this chapter, the calculative basis for deletion will be used to provide insight into not just the content of an algorithm, but its everyday composition, effects and associated expectations. However, the chapter suggests that deletion also poses a particular kind of problem: the creation of nothing (the deleted) needs to be continually proven. These focal points and the difficulties of providing proof are used to address suggestions in contemporary research that algorithms are powerful and agential, easily able to enact and execute orders. Instead, the chapter calls for more detailed analysis of what constitutes algorithmic success and failure.

**Keywords** Deletion · Proof · Calculation · Success and failure

## Opening

In Chapter 3, I suggested that our algorithms had begun to participate in everyday life by becoming involved in establishing the account-able order of life in the airport and train station. I also suggested that this form of account-ability intersected with concerns of accountability, in particular in relation to the project's ethical aims and the possibility of future data subjects being able to question the algorithm. You will recall that the project was funded in order to develop an algorithmic system that would reduce the amount of visual video data seen within a surveillance system and stored within such systems, without developing new algorithms. As we saw in the last chapter, the extent to which these ethical aims were achieved was not straightforward to assess as the system went through various forms of experimentation and change, the ethics board set up to hold the system to account raised new questions as the system developed and my own understanding of the system grew over time. One aspect of this unfolding experimentation and accountability that never disappeared, even as the project moved towards more thorough system testing in the train station and airport, was the focus on storing less data.

We have already seen that it took a great deal of effort to utilise algorithms to select out such matters as human-shaped and luggage-shaped objects and then to deem these relevant as, for example, abandoned luggage and issue an alert to operatives. What we have not seen yet is the struggle to delete the vast majority of data deemed irrelevant. Computer scientists from Universities 1 and 2 spent some time in meetings taking project members through conventions for deletion. Most forms of deletion, it turned out, either left a trace of the original from which data might be extracted or simply changed the route through which a user might connect to data (meaning the data itself would potentially be retrievable). The computer scientists, the consulting firm coordinating the project, the ethics board, StateTrack and SkyPort entered into discussion of what might provide an adequate form of deletion. For the computer scientists, changing the route for accessing information was an elegant solution (see Chapter 2 for more on elegance): it was an available, standard practice, and it would satisfy the project's ethical aims to the extent that it would match most other forms of deletion. However, the project coordinators sought a more thorough form of deletion—they were already looking to the potential market value (see Chapter 6) of a deleting machine. Expunging data from the system, overwriting data and corrupting data were all suggested as more thorough forms of deletion. Hence the algorithms would not just participate in making sense of everyday life, start to produce outputs that would subtly compose everyday life and become a feature of the everyday (on this, see Chapters 5 and 6): the algorithms would also start to delete, remove and reduce everyday life. The elegant and accountable algorithms would be attuned to this negation of the everyday. The chapter begins by considering a means for grasping the everyday life of algorithmic deletion.
It then looks at the deletion system in action to consider what it takes for our algorithms to be successful in not just identifying relevance but deleting irrelevance. The chapter concludes with some of the concerns that began to arise with deletion in the project and the future this portends.
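The distinction the computer scientists drew, between deletion that merely changes the route through which data is reached and deletion that expunges or overwrites the data itself, can be sketched in a few lines of Python. This is a hypothetical illustration using ordinary files as a stand-in for the project's video data; the function names and the file-based analogy are my own, not the project's code.

```python
import os
import tempfile

def delete_by_unlinking(path):
    """Remove the route to the data: the directory entry disappears, but
    the underlying bytes may persist on disk until something overwrites
    them - the 'elegant' but shallow form of deletion."""
    os.remove(path)

def delete_by_overwriting(path):
    """Overwrite the file's contents before removing it, so the original
    bytes no longer exist at that location - closer to expunging."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)  # replace every byte with zeros
        f.flush()
        os.fsync(f.fileno())     # push the overwrite to disk
    os.remove(path)

# Demonstration with a throwaway file standing in for irrelevant data.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"irrelevant video frame")
delete_by_overwriting(path)
print(os.path.exists(path))  # prints False: route gone, bytes overwritten
```

In both cases the file vanishes from view; the difference, and the coordinators' concern, lies in whether the data could in principle still be recovered afterwards.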

### Deletion and the Algorithm

Deletion and accountability are not only ethical aims of this project. Within the European Union, there has been a twin policy response to issues of algorithms and privacy through the right to be forgotten combined with a right to accountability. Current policy developments in Brussels anticipate that algorithms will be prevented from amassing and analysing data at will, through clear limitations on what data can be collected, how it can be used, how long it will be stored and the means of deletion. These principles will also be made accountable, even if it is not always clear what form that accountability will take. For the coordinators of the project that features in this book, deletion, as one part of an ethical and accountable algorithmic system, might provide a means to respond to these policy demands. The ethical aims might have market value.

These policy moves emerged through the complex and lengthy political developments that take place in the EU. The move to articulate and institute a 'right to be forgotten' or 'right to erasure' has been a feature of the revision of the EU Data Protection Directive (Directive 95/46/EC) into the new EU General Data Protection Regulation. As Bernal highlights, the right has become defined as 'the right of individuals to have their data no longer processed, and deleted when they are no longer needed for legitimate purposes' (2011: n.p.). This sits alongside a move to establish a basis for accountability. The EU Article 29 Working Party on Data Protection has issued an Accountability Principle which sets out a provision: 'to ensure that the principles and obligations set out in the [Data Protection] Directive [now a Regulation] are complied with and to demonstrate so to supervisory authorities upon request' (2010: 2). In this way, the principle of accountability is designed to ensure a transition from Data Protection in theory to practice and to provide the means to assess that this shift has adequately taken place.

Within the development of the new European General Data Protection Regulation, these two moves have become combined such that to delete must also become an accountable feature of activities; organisations must be able to demonstrably prove they have taken on responsibility for deletion and removed 'our' data. Although discussions of the Article 29 Working Party Accountability Principle and the proposed and critiqued revisions of the EU General Data Protection Regulation have been mostly focused on online data, these policy moves have also spurred broader concerns with data repositories and data analysis and the posited need for erasure. For example, erasure, forgetting and accountability have become key reference points in the development of what have become termed Privacy Enhancing Technologies (PETs) and Privacy by Design projects (see Goold 2009). Here the remit for data storage and analysis is not restricted to online data but also incorporates concerns with the kinds of video-based data that our algorithms specialise in. The premise of these arguments for PETs is that all algorithmic technologies ought to take privacy concerns into account. In these discussions, privacy is often understood in more or less straightforward binary terms. For example, it is proposed that if one's data no longer exists, there is no risk to one's privacy. One type of emerging PET within this field is autodeletion technologies (also see Mayer-Schonberger 2009). To delete and to accountably demonstrate that deletion has taken place appears to be an emerging benchmark for policy compliance. For the coordinators of the project, being able to set such a benchmark through the emerging system would be a step towards market launch (see Chapter 6). It was not only the ethical aims of the project that were at stake in developing a means to delete but the future market viability of the technology. Regulatory compliance could be sold on the open market.

The coordinators' search for a means to go beyond a conventional approach to deletion, which involves simply changing the connections through which a user might access data, was part of this preparatory market work. The conventional approach to deletion, supported by the computer scientists, was unlikely to fulfil the proposed terms of policy mechanisms such as the revised EU General Data Protection Regulation or the concerns articulated in the literature on PETs and Privacy by Design. The concern articulated as prompting the right to be forgotten/right to erasure is couched in terms of a need to expunge data from a repository, making it impossible to link, scrape, share or make further uses of that data; it is argued that simply changing the route via which information is retrieved can be overcome with little effort and reopens the data to all future uses. And the Article 29 Working Party accountability principle requires that compliance with such expunging is made clearly and demonstrably available. Deletion then sits centrally within the development of our algorithms. To be able to select out images and send them to operatives as alerts was a technical achievement, but to competently delete would require selecting out all data that did not need to be sent to operatives and developing a system for its removal.

This is a challenging basis for research for an ethnographic social scientist. The very thing being studied is always and already in the process of becoming nothing. It is a double negation: the data we need to study in this chapter is data that has been deemed *irrelevant*, and it needs to be studied precisely because it will be *deleted*. Studying irrelevance heading towards digital oblivion seems a challenge. In practice, both data and deletion can be traced up to a point, but then (at least in theory) the data should be gone. How can we grasp this partial and momentary thing—the action of deletion—along with this stuff that is here for a time and then goes—the irrelevant data?

One way forward is to return to the detail of the algorithms. If we can make sense of how the algorithms participate in the selection of things that are irrelevant and to be deleted, and we can then figure out how those things are deleted (or, as it turns out, not very well deleted), that might be a start. One way to work through the complexities of deletion is to make sense of what it is: a system for turning the complexities and uncertainties of the everyday into a basis for calculating and dividing relevance from irrelevance. As Callon and Muniesa (2005) suggest on calculating:

A calculative agency will be all the more powerful when it is able to: a) establish a long, yet finite list of diverse entities; b) allow rich and varied relations between the entities thus selected, so that the space of possible classifications and reclassifications is largely open; c) formalize procedures and algorithms likely to multiply the possible hierarchies and classifications between these entities. As this calculative power depends on the equipments that agencies can rely upon, we can easily understand why it is unevenly distributed among them. (1238)

We can think of our algorithms on these terms: they establish a finite list of entities (human-shaped objects, luggage-shaped objects, bounding boxes and close-cropped images), entered into varied relations (object action states such as moving the wrong way or abandoned), of possible hierarchies (particularly with the coordinators' interest in selling the technology in the future, see Chapter 6). That the algorithms will be the entities responsible for imposing this hierarchy of relevance on everyday life suggests they will play a key part in the formulation of this initial step towards deletion, among a complex array of relations also involving other system components, the spaces in which the system operates and so on.

This notion of calculative agency builds on a history of STS work on calculation. This includes studies of how accuracy is constructed (MacKenzie 1993), the accomplishment of numeric objectivity (Porter 1995), trading, exchange and notions of equivalence (Espeland and Sauder 2007; MacKenzie 2009), among many other areas. The kinds of concern articulated in these works are not focused on numbers as an isolated output of calculation. Instead, numbers are considered as part of a series of practical actions involved in, for example, solving a problem (Livingston 2006), distributing resources, accountabilities or responsibilities for action (Strathern 2002), governing a country (Mitchell 2002) and ascertaining a value for some matter (Espeland and Sauder 2007; MacKenzie 2009). Taking on these ideas, we can say that our algorithms are not only involved in classifying human-shaped and other objects and their action states, but also their relevance and irrelevance. The algorithms are involved in producing both quantities (a number of alerts, a complex means to parameterise visual data, the production of metadata and bounding boxes) and qualities (issuing or not issuing an alert, deciding between relevance and irrelevance). This is the starting point for the neologism of qualculation (Cochoy 2002; Thrift 2004). For Callon and Law:

Qualculation implies qualification. Things have to qualify before they can enter a process of qualculation … this can be … done in an endless number of ways. With an endless range of mechanisms and devices. (2005: 715)

The work of qualculation, they suggest, operates in three parts:

First, the relevant entities are sorted out, detached, and displayed within a single space. Note that the space may come in a wide variety of forms or shapes: a sheet of paper, a spreadsheet, a supermarket shelf, or a court of law – all of these and many more are possibilities. Second, those entities are manipulated and transformed. Relations are created between them, again in a range of forms and shapes: movements up and down lines; from one place to another; scrolling; pushing a trolley; summing up the evidence. And, third, a result is extracted. A new entity is produced. A ranking, a sum, a decision. A judgment. … And this new entity corresponds precisely to – is nothing other than – the relations and manipulations that have been performed along the way. (2005: 715)

Detachment, the forging of new relations and the production of a judged result provide an initial analytic focus for studying the combined practices of quantification and qualification. These forms of qualculation can be seen at work in recent academic discussions of algorithms. Studies of Google search engines (Gillespie 2013) and academic plagiarism software (Introna 2013) suggest that algorithms produce combined qualities and quantities in generating results. Taking plagiarism software as an example, we can see that such software would produce an algorithmic qualculation by detaching strings of characters (words, sentences and so on), forging new relations between those characters and other entities (by searching for similar or identical strings of characters in the world of published texts beyond the string) and producing a qualculative result: a basis for judging the similarity and distinctiveness of, for example, a student essay and already published texts. The algorithmic qualculation studied by Introna is a commercial product sold to universities, which uses detachment, the forging of new relations and the production of a result to generate a judgement of the students most likely to have plagiarised their essays.

This provides some starting points for thinking through the ways in which our algorithms are involved in the production of outputs—deciding relevance and irrelevance and sending alerts to operatives—that are qualculative. They set out a means to detach data, forge new relations and produce a judged result. This gives us a means to move on from our concerns in Chapters 2 and 3, focused on the means to classify and render those classifications accountable. But it is only one step forward: it alerts us to the importance of the algorithmic output (relevance or irrelevance), not yet what happens to that output. We need to discover a means to move from qualculation and the production of something—a judgement, a demarcation of relevance, an alert—to nothing—the deletion of irrelevance.

One starting point for augmenting the notion of qualculation by taking something and nothing into account is provided by the work of Hetherington and Lee (2000) on zero. They suggest that zero was introduced into western European mathematics and economics in approximately the fourteenth century. Zero provided the basis for a numeric logic of order at the same time as disrupting conventions for ordering—disrupting by connecting otherwise unconnected entities (nothing and the progressive accumulation of something from the number one upwards; as well as, at a later date, providing the basis for counting downwards with the introduction of negative numbers to Europe from around the seventeenth century)—and came to be seen as generating a new order. This despite zero itself being an underdetermined figure, both a sign on its own (signifying something of no value) and a metasign of order (providing for the significance of subsequent numbers or indicating rank in the decimal system). Hetherington and Lee suggest that: 'What [zero] reveals… is that very basic mathematical ordering practices are themselves dependent on a figure that refuses to adopt a singular position in their semiotic order' (177). Following on from this, we might think of our emerging algorithmic system for deletion not just as a focus for qualculation (doing something), but as a system that refuses to occupy a singular position (both something and nothing, doing and undoing data and its relations).

However, Hetherington and Lee (2000) go further and suggest that zero, as something and nothing, can also be considered a blank figure, something that: 'hybridises presence and absence rather than two forms of different presence' (175). Following from this, an intervention in an order—such as the introduction of zero—can be considered a blank figure when its nature is underdetermined, uncertain, unclear and troubling, provokes tension and generates not just a connection between preexisting entities but a basis for further investigation of those entities now connected. In this way, an algorithmic system might introduce an accountable nothing (the deletion of data) that would not just create (or remove) connections between entities, but also create new troubling questions (e.g. regarding the extent or adequacy or consequences of deletion). Whereas studies of qualculation appear to depend on the emergence of a result from a singular order ('a result is extracted'), the blank figure suggests a more persistent instability or multiplicity of order.

In this way, the work of Hetherington and Lee sensitises us to the possibility of disruptions to conventions of order through simultaneous somethings and nothings; zero provides a basis both for reordering something (the rules and conventions for order, such as negative numbers) and for considering nothing (a more literal zero). Following this argument, to introduce accountable deletion might be to generate instability and questions as much as order. The nature of data, of algorithms and their associations might be called into question, and so might the relations that generated the call for accountability in the first place. Instead of the algorithmic drama in current academic research that I noted in the Introduction and Chapter 2, we might have nothing (deletion), but we might also have a generative something (new accountability relations through which the deletion is demonstrated, alongside difficult questions regarding what constitutes adequate deletion). The generative dissonance or profound change in ordering provoked by the blank figure—the something and nothing—as we shall see, attains a brutish presence: its adequacy as both something and nothing is difficult to pin down and yet vital to the marketable future of the technology under development.

The suggestion in policy discussions around deletion and its accountable accomplishment is that an algorithm can in some way be limited (even through another algorithm). Yet taking on board the work of Cochoy, Callon, Law, Hetherington and Lee suggests that when a new qualculative form is constituted and inserted into sociomaterial relations, it can constitute a something and nothing, a disruption and form of disorder, a set of questions and not only a limitation. The production of something and nothing and its accountable accomplishment clearly requires detailed investigation. This chapter will now begin this investigation, particularly attuned to the possibility that deletion might generate blank figures, disorder as well as order. Attempts to accountably demonstrate that nothing has been created from something will be pursued, wherein I will suggest that the certainties of qualculation become overwhelmed by the disruptive figure of what might constitute deletion.

### Deletion and the Challenges of Nothing

Deletion had become a notable cause for concern in policy debates in the European Union (set out above) and in academic literature that describes deletion as a solution to the 'pernicious' features of 'comprehensive digital memory' (Mayer-Schonberger 2009: 11). For the project coordinators, deletion was a means to respond to these concerns and perhaps corner the market for accountably, ethically and algorithmically deleting data. Firms that needed to respond to new policy requirements might, after all, need a deletion system. But deletion had also become a cause for concern within the project. The computer scientists' interest in a conventional form of deletion that was not particularly secure or complete, but was straightforward, stood in contrast to the views expressed by the project coordinators and the ethics board, who for different reasons wanted a more thorough-going form of deletion. Should deletion simply involve changing the route by which data was accessed, or should it involve expunging data from the system, corrupting it or overwriting it?

These questions responded to the project's ethical aims in different ways, required different amounts of effort, budget and expertise, and might provide different ways to make sense of the technology's market potential (see Chapter 6). These concerns were not easy to separate. As the project moved beyond the experimental phase that we saw in Chapters 2 and 3, towards a more fully operational system that would be tested live in the train station and airport, a decision was required on what ought to constitute deletion. The consultancy firm that coordinated the project decided, with ethics board support, to pursue the development of a comprehensive but complex deletion system. Eventually, this would involve using solid-state drives for data storage, with data then overwritten by an automated system, making it more or less irretrievable. To begin with, however, solid-state technology was not available to the project and the means to automatically overwrite data was not yet developed in a way that would work on the project's system architecture. Moreover, the system also had to demonstrate that it could successfully demarcate relevant from irrelevant data in order that the irrelevant data could be overwritten. Data tagged 'relevant', once it was no longer needed, and metadata (such as timestamps and bounding box dimensions) would also need to be deleted. And not just deleted, but demonstrably and accountably deleted, so that various audiences could be shown that deletion had taken place and that the system worked. TechFirm, a large IT network provider and a partner in the project, had taken on the task of ensuring that the deletion system would be accountable. The complexity of deletion did not end here: discussions continued around how quickly data should be deleted. Just how long should data be stored? What was the correct ethical and practical duration for data storage?
Operatives might need to do Route Reconstruction sometime after an alert had been issued, but ethical demands suggested such storage times should be limited. As a feature of the emerging technology under test conditions, 24 hours was initially set as a data storage period that responded to ethical and emerging policy imperatives and the practical requirements of operatives.

These were each significant challenges in software and hardware, but also conceptually and ethically. This was not simply about producing nothing—the deleted. Instead it involved the continual and simultaneous production of nothing and something—the deleted, an account that could demonstrably attest that deletion had taken place, a new benchmark for deletion, a new system that could take on all the requirements of end-users oriented towards data retention at the same time as satisfying the ethics board and newly emerging regulations that data would not be stored. It was through this array of questions and concerns that deletion became a blank figure, both something and nothing, a troubling and disruptive figure within the project.

As the project moved out of its experimental phase, our algorithms and their IF-THEN rules would need to provide the basis for demarcating relevance from irrelevance with a level of confdence that would enable deletion to take place (although as we will see in Chapter 5, this was in itself a challenge). As I suggested in previous chapters, the Event Detection algorithms for moving the wrong way, moving into a forbidden space and abandoned luggage were also termed relevancy detection algorithms. In order to decide what ought to be deleted, these algorithms would need to sift through streams of digital video data streamed from the airport and train station video surveillance system, via the system's Media Proxy that we noted in Chapter 2 (to smooth out any inconsistencies). This should make available somewhere between 1 and 5% of data to operatives of the surveillance system through the User Interface as images that they might need to look at more closely. The Route Reconstruction system we saw in Chapter 3 might expand on these amounts of relevant data a little by creating 'sausages' of data around an image, constructing the history and future around a specifc image selected by the algorithms. Still the technology ought to be able to select out huge amounts of irrelevant data for deletion. Even data that appeared to be initially relevant and was shown on the User Interface to operatives of the surveillance system and Route Reconstruction data would only be kept for a short time until reviewed by operatives who could also declare the images irrelevant and send them for deletion.

At the end of the experimental phase of the project, it might seem far-fetched to describe deletion as a disorderly and disruptive blank figure based on complex qualculations of quantities and qualities. Relevant data could be checked and then deleted. Irrelevant data, by default, would be all the other data. This apparent certainty, at least at this stage of the project, extended through the algorithmic system. The IF-THEN rules were clear, the maps of the fixed attributes of the experimental settings were clear, and the models for object classification and the action states of objects as worthy of further scrutiny all seemed clear. The quantities involved were significant—terabytes of digital video data—but the qualities—mostly operatives clicking on text alerts and watching short videos—were neatly contained. Following Callon and Law (2005), we could say that this was the first step towards a straightforward form of qualculation. Things were separated out and disentangled such that they might be recombined in a single space (within the algorithmic system). The background subtraction technique that we saw in Chapter 2 provided this seemingly straightforward basis for beginning demarcations of relevant data (to be kept) and irrelevant data (to be deleted). A result could be extracted.

However, the project was now moving beyond its initial experimental phase. In the airport and train station, as the technology moved towards system testing, the computer scientists from Universities 1 and 2 began to engage with the complexities of relevance detection in real time and real space. They started to look for ways to tidy up the initial steps of object classification (which provided approximate shapes for background subtraction) in the airport and train station, through ever more closely cropped pixel masks for objects, with any single, isolated pixels erased and any holes between pixels filled. They suggested masks could be further tidied by removing shadow, just leaving the new entity. And these tidied-up entities could now be subjected to object classification with what the computer scientists hoped was greater certainty. They were cleaned and tidied objects. Object classification would now define with confidence the objects in view as, for example, human-shaped or luggage-shaped. Cleaning the images, removing shadow and removing gaps in pixel masks was more processor intensive than the initial quick and dirty technique we noted in the earlier experimental phase of the project, but it was still computationally elegant for the computer scientists. It was a reasonably quick technique for ascertaining a classification of putative objects and it was a classification in which they (and other project participants) could have confidence.
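The mask 'tidying' described above—erasing isolated pixels and filling holes in a binary foreground mask—can be illustrated with a toy morphological pass. This is a minimal sketch, not the project's actual code: the function name, the use of NumPy and the single-pixel neighbourhood rule are all assumptions for the sake of the example.

```python
import numpy as np

def tidy_mask(mask):
    """Erase isolated foreground pixels and fill single-pixel holes."""
    m = mask.astype(bool)
    padded = np.pad(m, 1)
    h, w = m.shape
    # Count each pixel's 8 neighbours by summing shifted views of the mask.
    neighbours = sum(
        padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w].astype(int)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    tidied = m.copy()
    tidied[m & (neighbours == 0)] = False   # isolated pixels: erased
    tidied[~m & (neighbours == 8)] = True   # fully enclosed holes: filled
    return tidied

# A blob standing in for a human-shaped object, with one hole
# inside it and one stray pixel elsewhere in the frame.
mask = np.zeros((7, 7), dtype=bool)
mask[1:4, 1:4] = True
mask[2, 2] = False   # single-pixel hole
mask[5, 5] = True    # isolated pixel
tidied = tidy_mask(mask)
```

Real systems would typically use full morphological opening and closing (and a separate shadow-suppression step), but the shape of the operation is the same: a local neighbourhood rule applied across the whole mask.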

Object classification required this more developed form of qualculation, drawing entities together into new relations such that they might be qualified for judging as relevant or irrelevant, because the system faced new challenges in working in real spaces in real time. Classifying something as a human-shaped object still involved algorithmic analysis of video streams in order to draw the parameters (size and shape) of human-shaped or other shaped objects; it still required background subtraction; and each object was still identified through a vector of around 200 features, so each object in itself was complicated. But the airport and train station involved far more cameras than initial experimentation, data in a wider array of formats and framerates, and a far greater number of human-shaped and other objects.

Confidence in the system's ability to demarcate relevant from irrelevant data had to remain high as the algorithmic system required further development in order to work in the airport and train station. In particular, object tracking in the airport and train station needed to be attuned to the specificities of the spaces in which it would work. Object tracking, just like our abandoned luggage algorithm, had to be able to grasp everyday life. Object tracking was vital for the Route Reconstruction system to work and follow a human-shaped object across multiple cameras, and for the system to know if human-shaped and luggage-shaped objects were moving apart, to know if human-shaped objects were moving the wrong way or into a forbidden space.

Once an object was given a bounding box and metadata had started to be produced on its dimensions, and the speed and direction of the box was noted in its movement across the screen, then object tracking needed to take on the complexities of the train station and airport. The bounding box had to be tracked across one camera's visible range, but also between cameras in the train station and airport where the system searched for other bounding boxes of the same dimensions, relative to camera position, angle and zoom. To know that a human-shaped object on camera 17 was then the same human-shaped object that appeared on camera 18 and was the same human-shaped object that had previously appeared on cameras 11, 7, and 6, required a sophisticated form of tracking. Calculating objects in this way involved what the computer scientists termed Tsai calibrations. These did not operate using pixels alone, but rather by working out the position of an object relative to a camera, its position, angle and zoom, and then counting the number of pixels to figure out the dimensions of that object in centimetres relative to its distance and angle from a camera. Knowing the size in centimetres of an object in the space of the train station and airport would enable object tracking to happen. But to calculate the size of an object in centimetres (rather than just its size on a screen), the world of the video stream had to be connected to the world of measurement in the space where the camera was located (the airport or train station) and the world of the objects within the video stream had to be connected to the world out there of people, luggage, etc. This was accomplished by measuring the space seen by a camera and then incorporating those measurements into a topological database drawn on by the Event Detection component of the algorithmic system.
Eleven conversion coefficients, including the angle and zoom of the camera in relation to the world-out-there measurements, were now involved in producing an object's size and initiating object tracking.
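At the core of any such pixel-to-centimetre conversion is a similar-triangles relation between image size, distance and focal length. Tsai's published calibration model involves many more parameters (camera position, orientation, lens distortion and so on); the sketch below keeps only the basic pinhole geometry, and the function name and the numbers used are illustrative assumptions, not the project's values.

```python
def pixel_height_to_cm(height_px, distance_cm, focal_length_px):
    """Estimate an object's real-world height from its bounding-box
    height in pixels, given its distance from the camera and the
    camera's focal length expressed in pixels (pinhole model):
    h_world = h_px * d / f.
    """
    return height_px * distance_cm / focal_length_px

# A 100-pixel-tall bounding box, seen 10 m from a camera with a
# 500-pixel focal length, implies an object roughly 2 m tall --
# plausibly a human-shaped object rather than a luggage-shaped one.
estimated_height = pixel_height_to_cm(100, 1000, 500)
```

The project's eleven coefficients extend this basic relation to account for camera angle, zoom and position, which is what allows the same physical object to be matched across cameras with different viewpoints.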

In this way, demarcating relevant from irrelevant data in the everyday space of the train station and airport, in contrast to the experimental space where all measurements were more or less already fixed and known, required more qualculative work. Judgements had to be made on what might work as a basis for connecting the images on the video stream to the objects that they referenced in the airport or train station. Accurate measurements of the space had to be compiled in a database. And this database had to be combined with the database of popular routes, the metadata on size, speed and direction of bounding boxes, and the algorithmic IF-THEN rules in order to build the sausage of data around an image that we looked at in Chapter 3. Without these efforts, all data except for the single images of luggage once it was abandoned, or of a human-shaped object moving the wrong way or into a forbidden space, would be deemed irrelevant and deleted. Qualculative work to connect the airport and train station space to the video data flowing through the algorithmic system was needed to prepare data for deletion or salvation.

This qualifying work, separating things out, drawing them together into classifications, working through IF-THEN rules to further qualify whether an image needed to be seen by operatives, was directed towards reducing the amount of video-based data made visible and the amount of data stored, and achieving the project's ethical aims. Qualculative work was complex in that it involved detailed efforts to know the everyday space in which the surveillance system operated, build that space into the algorithmic system, and come up with a means to identify and qualify relevant objects. However, this was merely a first step in the move towards deletion.

Achieving the project's ethical aims required a combination of this notable something—a potentially relevant event from which to issue an alert—and a broad aggregate category of nothing—the irrelevant data to be deleted. This also required that the nothing itself became accountable. What was deleted had to be demonstrably seen to be deleted. In part this involved gathering all the data not seen by operatives, along with those clips deemed irrelevant by operatives, and deleting that data. However, it also involved retaining the orderly integrity of the accountability process imagined in relation to the initial qualculation process. Deletion needed to follow a similar logic to that of background subtraction and object classification, which were expected to be appropriately qualified and made available for accountable judgement.

In this project, to generate accountable certainty, the system was designed to work in the following ways. A secure erase module (SEM) would be built of three sub-modules: a secure erasure scheduler (SES); a secure erase agent (SEEA); and a log generator (SELG). The SES would work with the other system components to retrieve data to be deleted (this would operate using a FIFO queuing system). The SES would send a series of requests for data to the other system components. These requests would include the full path to the file to be deleted; the start point of deletion (based on temporal parameters); and the end point of deletion (using temporal parameters to calculate the final block of video data to be erased in each session).
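The SES request and its FIFO queue can be sketched as follows. The field names, types and the example path are hypothetical: the text specifies only that each request carries the full file path and the temporal start and end points of deletion, and that requests are served in first-in, first-out order.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class DeletionRequest:
    """One SES request, as described in the text (illustrative names)."""
    file_path: str     # full path to the file to be deleted
    start_time: float  # temporal start point of deletion (seconds)
    end_time: float    # temporal end point: the final block to erase

# FIFO queue: the oldest scheduled deletion is always served first.
queue = deque()
queue.append(DeletionRequest("/video/cam17/stream.dat", 0.0, 86400.0))
queue.append(DeletionRequest("/video/cam18/stream.dat", 0.0, 86400.0))

first = queue.popleft()  # camera 17's data is handled before camera 18's
```

A FIFO discipline fits the 24-hour retention rule described earlier: the data that has been waiting longest is also the data whose storage period expires first.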

The SEEA would then work on the data to ensure it was overwritten and completely irretrievable from within the system. Overwriting was designed to try to ensure that data could not be retrieved from within the system and to provide accountable certainty for its non-status. The project participants hoped that they could demonstrate that overwriting had taken place and that the data had become irretrievable. In place of conventional deletion, whereby data access routes would be cut, overwriting became the basis for expunging data from the system (although in practice this turned into something closer to corrupting than expunging the data, as expunging proved technically difficult to automate). The SEEA would then check that deletion was successful by matching the content deleted with that selected by the SES. After deletion, the SELG would produce a log of data deleted. The log would include the file names of deleted objects, the time taken to delete and the form of overwriting that had been applied. The SELG would act as the key component for producing accountable certainty of nothing—that the data to be deleted was now deleted—as well as something—the account of nothing.
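A minimal sketch of what an overwrite-then-log step could look like, assuming a simple multi-pass random overwrite. The SEEA and SELG internals are not documented, so everything here is illustrative; note too that on solid-state drives, wear levelling can leave copies that in-place overwriting never touches, one reason why expunging is hard to automate.

```python
import os
import tempfile
import time

def overwrite_and_log(path, passes=3):
    """Overwrite a file with random bytes, remove it, and return a
    log entry of the kind the SELG is described as producing:
    file name, size, overwriting method and time taken."""
    size = os.path.getsize(path)
    start = time.time()
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # corrupt the stored bytes
            f.flush()
            os.fsync(f.fileno())       # force the write to disk
    os.remove(path)
    return {"file": path, "bytes": size, "passes": passes,
            "seconds": round(time.time() - start, 3)}

# Demonstration on a throwaway file standing in for a video segment.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 1024)
os.close(fd)
entry = overwrite_and_log(path)
```

The returned dictionary is the 'something' produced from 'nothing': the file is gone, but an auditable record of its deletion remains.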

To make an accountable something from nothing, an external viewer component would parse the log to make it readable by humans and then a human system administrator could audit the log and check it against expectations of how much data should have been deleted (e.g. by comparing how much data had been deleted against how much data passed through the system on average every 24 hours) and whether any traces had been left (of either video streams or metadata relating to, for example, object classification or bounding boxes). Events which had been the subject of an alert to operatives would be reviewed manually on a regular basis and then also moved into the SEM for deletion as necessary. The audit log provided a basis for demonstrating within the project that deletion was working. As an internal accountability mechanism it could become a means to see that the algorithm was limited, that further judgements could not be made on the corpus of video-based data that would now be unavailable.
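The administrator's audit check, comparing the volume of deleted data against expected daily throughput, could be sketched like this. The tolerance threshold and the log field names are assumptions introduced for the example, not figures from the project.

```python
def audit_deletion_log(entries, expected_bytes, tolerance=0.05):
    """Compare the total bytes recorded as deleted against the bytes
    expected to pass through the system in the same period.

    Returns the total and whether it falls within the (assumed)
    tolerance; a shortfall suggests data escaped deletion."""
    deleted = sum(entry["bytes"] for entry in entries)
    complete = abs(expected_bytes - deleted) <= tolerance * expected_bytes
    return {"deleted": deleted, "complete": complete}

# A clean day: the log accounts for all expected data.
ok = audit_deletion_log([{"bytes": 98}, {"bytes": 2}], expected_bytes=100)

# Orphan frames left behind show up as an audit shortfall.
shortfall = audit_deletion_log([{"bytes": 60}], expected_bytes=100)
```

It is exactly this kind of check that, in the tests described below, kept returning an account of partial failure rather than a clean account of nothing.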

In this sense, accountability (in the form of a data log) ought to provide the means to transform nothing (the deleted) into something (proof of deletion) and to do so in an orderly and certain manner. The log bore the responsibility for accountable action and for achieving the project's ethical aims. However, the results derived from system testing suggested deletion would be anything but straightforward. In tests carried out 'live' in the airport, designed to act as a demonstration of system capabilities for potential users (airport security operatives), video frames and metadata were not gathered in their entirety, orphan frames were left behind on the system, and the reporting tool merely produced a continual accountable output of partial failure. Problems particularly appeared during secure auto-deletion; it was in the moment that data should be corrupted and made irretrievable that some data evaded the system's grasp. The computer scientists involved in the project could get the system to auto-delete the system files in their entirety by using an insecure deletion protocol (which effectively involved a conventional approach to deletion, changing the routes via which data could be accessed) or by dropping auto-deletion and carrying out a manual corruption process (which might prove more complete but also required more work). The elegant solution of automatic, accountable deletion remained out of reach. This would prove important in efforts to establish the market value of the technology (see Chapter 6), but also somewhat pre-empted the chaotic scenes of demonstration that the entire system began to experience as it moved outside its initial experimental phase (see Chapter 5). Everyday life and the algorithmic system did not see eye to eye.

Work to build the algorithmic deleting machine and constitute an ordered and certain accountable nothing, a notable absence, instead became the basis for establishing a precarious kind of uncertain presence. Orphan frames and the audit log continually generated a disorderly account of something instead of nothing, a blank figure (Hetherington and Lee 2000) that paid recognition to the terms of its own order (that it should find and prove the existence of nothing), but also questioned that order (by finding orphan frames that then required explanation). The system threatened to overwhelm the qualculations that had tried to establish a demarcation between relevant data to be kept and irrelevant data to be deleted.

The audit log generated a notable question for the project participants: could the technology still be sold primarily on the basis of its technical efficacy in deleting? The clear and negative answer to this question for the coordinators required a significant switch in the conditions under which parties might be invited to engage with the system. Initially the project coordinators had sought to take the internal accountability mechanisms of deletion out into the world as a basis for bringing the world to the deleting machine. They sought to develop from nothing a market-valued something. After these somewhat sketchy results, the project coordinators sought to leave aside the technical difficulties through which nothing (the deleted) failed to be effectively and accountably constituted, at the same time as they continued to embark on concerted market work. As we will see in Chapter 6, having one form of calculation overwhelmed by this blank figure encouraged the coordinators to seek a different basis for ordering their calculations.

### Conclusion

In this chapter, we have seen that grasping everyday life and participating in everyday life became more challenging for our algorithms as they moved from experimentation to something closer to system testing in the train station and airport. What might be termed the real-world conditions of real-time and real-space operations proved difficult. Indeed, the algorithmic system needed more development to cope with these new exigencies. Further measurements and a new database were required to build durable links between the space of the airport and train station and the video stream that flowed through the system.

As our algorithms moved from experimentation towards testing in the train station and airport, their calculations changed, the system components were further developed, and a more complex and uncertain everyday life needed to be engaged. This was all oriented towards demarcating relevant from irrelevant data in order to delete. Deletion was seen by the project coordinators as crucial to achieving the project's ethical aims and to building a market value for the technology under development. The system, it was hoped, would become the first choice among firms looking for automated ways to manage their adherence to new data regulations.

Yet deletion could not happen through demarcation alone. Decisions had to be taken on the form that deletion would take (changing the route to access data, expunging, overwriting, corrupting data) and the means to accountably demonstrate that deletion had happened. Although decisions were made on all these matters, problems remained. The anticipated nothing—the deletion of irrelevant data—retained a troubling presence in orphan frames that inexplicably escaped the deletion protocols. The anticipated something—a log that accountably demonstrated to audiences that deletion had taken place—was then undermined. In place of a pristine account of nothing (the deleted) was a continual demonstration of the presence of something (the orphan frames): the algorithmic machine had become an expert in accountably demonstrating its own failures. This disruptive blank figure, always attentive to the order in which it was expected to work, was simultaneously managing to challenge that order, by placing significant questions next to the algorithmic system's future viability. As we will see in Chapters 5 and 6, these questions only became more pronounced over time.

In sum, we have seen in this chapter that doing deletion can be a form of active qualculative work. The members of the project team dedicated hours of effort to building a machine to algorithmically delete. The technical work was also preparatory market work and accountability work. It involved coordination, computer science, social science, the invocation of end-user needs, and different ways to understand a developing policy environment. Doing this work was neither singular nor straightforward, but involved somehow making something from this diverse array. And making something required qualculations to separate out and identify objects, then bring those objects together in object classifications in order to be judged. Yet setting limits for our algorithmic system through deletion was not straightforward; for something to be convincingly limited, it needed to be demonstrably and accountably limited. The work to produce an accountable deleting machine was focused on producing a machine that could account for itself and the way it set limits, demonstrating nothing (the product of deletion) as a prior step to something (the account of nothing, building a world of relations of value into the technology). However, accountability work was also uncertain and a little precarious, with the world of relations of people and things assembled to do accountability shifting between certainty and uncertainty. The study of making deletion accountable emphasised this precariousness: to prove that nothing existed as a result of something being deleted, without resurrecting the thing deleted, proved an ongoing conceptual and practical challenge. As we will see in Chapter 5, this was only the start of a series of challenges for our algorithms.


# Demonstrating the Algorithm

**Abstract** This chapter explores the problems involved in demonstrating an algorithmic system to a variety of audiences. As the project reached its final deadlines and put on demonstrations of the technology under development to various audiences—including the project funders—it became ever more apparent that, in a number of ways, promises made to key audiences might not be met. In project meetings, it rapidly became apparent that a number of ways of constituting a response to different audiences and their imagined demands could be offered. To manage this problem, the chapter shows that a range of different, more or less 'genuine' demonstrations with greater or lesser integrity were discursively assembled by the project team, and ways to locate and populate, witness and manage the assessment of these demonstrations were brought to the table. The notion of integrity is used to incorporate sight, materiality and morality into the growing literature on algorithms.

**Keywords** Demonstration · Expectation · Integrity · Morality · Witnessing

# Opening

In this chapter, our algorithms will continue their journey into the everyday. Beyond the initial expectation held by project participants (see Chapters 2 and 3) that in experimental settings the algorithms might prove their ability to grasp features of everyday life, the algorithms must now, through testing and demonstration, express their ability to become the everyday. However, building on the portents of Chapter 4, wherein the deletion system ran into trouble with orphan frames, what we will see in this chapter is a broader and more calamitous collapse of the relation between the algorithms and the everyday. As the project team drew closer to its final deadlines and faced up to the task of putting on demonstrations of the technology under development to various audiences—including the project funders—it became ever more apparent that the algorithms struggled to grasp the everyday, struggled to compose accounts of the everyday and would struggle to become the everyday of the train station and airport. Promises made to funders, to academics, to potential end-users and to ethical experts brought in to assess the technology might not be met. It became clear to the project team that a number of ways of constituting a response to different audiences and their imagined demands would need to be offered. This did not involve a simple binary divide between the algorithmic system working and not working. Instead, a range of different demonstrations, with what we will describe (below) as greater or lesser integrity, were discursively assembled by the project team; ways to locate and populate, witness and manage the assessment of demonstrations were brought to the table. Several of the demonstrations that had already been carried out were now reconceptualised as showing that features of the algorithmic system could perhaps work. Agreements were rapidly made, tasks were distributed and imminent deadlines agreed; the demonstrations (with varying integrity) were only weeks away.

The growing science and technology studies (STS) literature on demonstrations hints at a number of ways in which the integrity of demonstrations might be engaged. For example, Smith (2004) suggests demonstrations may involve elements of misrepresentation or partial fabrication. Coopmans (2010) analyses moves to conceal and reveal in demonstrations of digital mammography which manage the activity of seeing. And Simakova (2010) relates tales from the field of demonstration in technology launches, where the absence of the technology to be launched/demonstrated is carefully managed. These feature as part of a broader set of concerns in the STS demonstration literature, including notions of witnessing (Smith 2009), dramaturgical metaphors and staging (Suchman 2011), and questions regarding who is in a position to see what at the moment of visual display (Collins 1988). These studies each have a potential relevance for understanding the demonstrations in this chapter.

The chapter will begin with a discussion of the ways in which recent STS literature has handled future orientations in studies of technology demonstrations, testing, expectations and prototyping. This will provide some analytic tools for considering the work of the algorithmic system in its move into forms of testing and demonstration. I will then suggest that notions of integrity provide a means to turn attention towards the practices of seeing, forms of morality and materiality made at stake in demonstrations of our algorithms. The chapter will conclude with a discussion of the problems now faced by our algorithms as a result of their demonstrable challenges.

### Future Orientations of Technology

STS scholars have provided several ways of engaging with potential futures of technology, expectations of science and technology, technology under development and/or technologies that continue to raise concerns regarding their future direction, development, use or consequence. Drawing these studies together, three analytic themes emerge which I will use as a starting point for discussion. First, studies of technology demonstration, testing, displays, launches, experiments, and the management of expectations frequently incorporate considerations regarding what to show and what to hide. Coopmans (2010) terms this the management of 'revelation and concealment' (2010: 155). Collins (1988) suggests that what might nominally be presented as a public experiment (in his case in the strength and integrity of flasks designed to carry nuclear waste) is more akin to a display of virtuosity (727). In order to manage and maintain such virtuosity, only partial access is provided to the preparation of the 'experiment', rendering invisible: 'the judgements and glosses, the failed rehearsals, the work of science – that provide the normal levers for criticism of disputed experimental results' (728). Through this process of revelation (the public display) and concealment (hiding the preparation and practice), 'the particular is seen as the general' (728). That is, a flask containing nuclear waste is not presented as simply surviving this particular train wreck, but a display is put on through which we can see that all future trouble that this flask, and other similar flasks, might face will be unproblematic. Through historical studies of scientific display and demonstration (Shapin 1988, cited by Collins 1988; Shapin and Schaffer 1985) we can start to see the long-standing import of this movement between revelation and concealment for the continued production and promotion of fields of scientific endeavour.
Through contemporary studies of technology demonstration, sales pitches and product launches (Coopmans 2010; Simakova 2010), we can note the putative market value of concealment and revelation.

Second, technology demonstrations and the management of future technological expectations do not only involve a continual movement between revelation and concealment, but also a continual temporal oscillation. Future times, places and actions are made apparent in the here and now (Brown 2003; Brown and Michael 2003). For example, future concerns of safety, reliability, longevity and even ethics (Lucivero et al. 2011) are made demonstrably present in the current technology. Furthermore, work to prepare a prototype can act as a sociomaterial mediator of different times and work orientations, 'an exploratory technology designed to effect alignment between the multiple interests and working practices of technology research and development, and sites of technologies-in-use' (Suchman et al. 2002: 163). For example, the possibilities of future mundane technology support and supplies are made manifest as features of demonstrations and launches (Simakova 2010). Alternatively, the limitations of a technology as it is now can be made clear as part of the work of emphasising the future developmental trajectory of a technology, or as a feature of attesting to the professionalism and honesty of the organisation doing the demonstration (that the organisation can and already has noted potential problems-to-be-resolved; Smith 2009).

Third, within these studies there is an emphasis on the importance of audiences as witness. Drawing on Wittgenstein, Pinch (1993) suggests that audiences have to be persuaded of the close similarity between the demonstration and the future reality of a technology; they have to be persuaded to place in abeyance all the things that might make for a possible difference and instead agree to select the demonstrator's criteria as the basis for judging sameness. However, this is not simply a case of the audience being dupes to the wily demonstrator. Smith (2004) contends the audience, the potential customer, can be knowledgeable of the limits of a technology, seeking to gain in some way from their participation in the demonstration and/or willing to 'suspend disbelief' in the artifice of the presentation (see also Coopmans [2010] on knowing audience members and their differential reaction). Through what means might audience members make their conclusions about a demonstration? Suchman (2011), in studying encounters with robots, looks at how persuasion occurs through the staging and witnessing that is characteristic of these scenes. Audiences, Suchman suggests, are captured by the story and its telling. Drawing on Haraway's modest witness, Suchman outlines how the audience are positioned within, rather than outside, the story; they are a part of the world that will come to be. Pollock and Williams (2010) provide a similar argument by looking at the indexicality of demonstrations which, to have influence, must create the context/world to which they point. Developing this kind of analytical position, Coopmans (2010) argues that audiences are integrated into the complex management of seeing and showing. Audiences are classified and selectively invited to identify with a particular future being shown through the demonstration and attached to the technology being demonstrated.
'Efforts to position the technological object so as to make it "seeable" in certain ways are mirrored by efforts to configure an audience of witnesses' (2010: 156).

Smith's (2004) utilisation of the dramaturgical metaphor for technology demonstrations suggests these three focal points, of concealment and revelation, temporal oscillation and witnessing, are entangled in particular ways in any moment of demonstration. I will suggest that these three themes are also prevalent in preparing our algorithms for demonstrable success. But, first, I propose a detour through integrity as a basis for foregrounding the subsequent analysis of our algorithms and for developing these ideas on demonstration.

### Integrity and the Algorithm

Elements of the technology demonstration literature already appear to lend themselves to an analysis of integrity in, for example, studies of partial fabrication, revelation and concealment. However, it is in the work of Clark and Pinch (1992) on the 'mock auction con' that we find a rich and detailed study of a type of demonstration tied to questions of integrity. The central point of interest for us in this study is that those running the mock auction con build a routine and: 'The various repetitive elements of the routine… provide local-historical precedents for understanding what occurs and for the audience determining what (apparently) is likely to follow' (Clark and Pinch 1992: 169). In this way, the audience to the mock auction are convinced into bidding for 'bargain' priced goods that are not what they appear through allusions to the origin of those goods (that perhaps they were stolen, must be sold quickly and so on). Being able to point to a context which seems available—but is not—and could provide a reasonable account for buyers to make sense of the 'bargains' which seem available—but are not—is central to managing the con (getting people to pay over the odds for low-quality goods).

We notice similar themes of things appearing to be what they are not emerging in popular media stories of fakes, whether focused on an individual person or object (such as a fake death,1 fake stamp,2 or fake sportsperson,3 a fake bomb detector4 or a fake doctor accounting for more5 or fewer6 deaths) or a collective fake (where the number of fake documents,7 fossils,8 or the amount of money9 claimed to be fake, takes centre stage).10 In each case, the success of the fake in not being discovered for a time depends on a demonstrative ability to point towards a context which can successfully account for the claimed attributes of the person or object in focus. This is what Garfnkel (1963, 1967) would term the relation of undoubted correspondence between what something appears to be and what it is made to be through successive turns in interaction. We pay what turns out to be an amount that is over the odds for an item in the mock auction con, but what constitutes an amount that is over the odds is a later revelation. At the time of purchase, we have done no more than follow the ordinary routine of paying money for an item. We have done no more than trust the relation of undoubted correspondence. I would like to suggest that this kind of context work, where we manage to index or point towards a sense of the scene that enables the relation of undoubted correspondence to hold, can be addressed in terms of integrity. A dictionary definition of integrity suggests: '1. the quality of having strong moral principles. 2. the state of being whole'.11 Thus context work in situations of fakes or cons might be understood as directed towards demonstrating the moral and material integrity required for the relation of undoubted correspondence to hold (where what is required is a feature established within the setting where the interactions take place).

We can explore this notion of integrity further in the most developed field of fakery: fake art. Research in this area (see, for example, Alder et al. 2011) explores famous fakers12 and the shifting attribution of artworks to artists.13 The integrity of artworks in these situations appears to depend on work to establish that a painting is able to demonstrate material properties that support a claim to be genuine.14 In order to convincingly account for the integral 'whole' of the painting, material properties are articulated in such a way as to indexically15 point the artwork towards a context (of previous 'sales', auction catalogues which definitively describe 'this' artwork as attributed to a particular artist, dust which clearly demonstrates its age). We might note that such indexing is crucial to constituting the context. However, the sometimes arduous efforts to accomplish a context must be split and inverted (Latour and Woolgar 1979) in such a way that the artwork appears to effortlessly point towards 'its' context in a way that suggests this context has always been tied to this artwork, enabling the artwork to seem to be what it is. The work to actively construct a context must disappear from view in order for an artwork to effortlessly index 'its' context and attest to the 'whole' *material* integrity of the artwork; that it has the necessary age and history of value, ownership and exchange to be the artwork that it is.
Furthermore, artworks need to be *seen* for what they are, with, for example, brushstrokes becoming a focal point for audiences of expert witnesses to attest that in the brushstrokes, they can see the style of a particular artist,16 with such witnesses then held in place as supporters of the see-able integrity of the artwork.17 Declarations of the nature of an artwork (its material and visual integrity) also appear to be *morally* oriented, such that constituting the nature of a painting as correctly attributed to an artist becomes a means to constitute the moral integrity of: the material properties and practices of seeing that have established the painting as what it is (as genuine, or as fake and thereby morally corrupt); its human supporters as what they are (as, for example, neutral art experts or morally dubious individuals who may be seeking financial gain from a painting's material and visual integrity). The material, visual, moral question of integrity becomes: can the object to hand demonstrate the properties for which it ought to be able to account, indexically pointing towards a context for establishing the integrity of the material properties of the artwork and the practices through which it has been seen by its supporters? Can it maintain a relation of undoubted correspondence between what it appears to be and what it interactionally becomes?

In a similar manner to technology demonstrations, fakes appear to incorporate a concern for revelation and concealment (revealing a husband's method of suicide, concealing the fact he is still alive), temporal oscillation (authorities in buying a fake bomb detector, also buy a future into the present, imagined and indexically created through the technology's apparent capabilities) and the careful selection and positioning of audience within the narrative structure being deployed (particularly when faking a marriage or other notable social ceremony). However, fakes (particularly fake artworks) alert us to the possibility of also considering questions of visual, material and moral integrity in forms of demonstration. Returning to our algorithms will allow us to explore these questions of integrity in greater detail.

### Demonstrating Algorithms

From their initial discussions of system architecture and experimentation with grasping the human-shaped object (see Chapter 2), to the start of system testing, demarcating relevant from irrelevant data and building a deleting machine (see Chapter 4), the project team had retained a confidence in the project's premise. The aim was to develop an algorithmic surveillance system for use, initially, in a train station and airport that would sift through streams of digital video data and select out relevant images for human operatives. As I suggested in Chapter 2, the idea was to accomplish three ethical aims: to reduce the amount of visual video data that was seen by operatives, to reduce the amount of data that was stored by deleting irrelevant data, and not to develop any new algorithms in the process. Up until the problems experienced with the deletion system (see Chapter 4), achieving these aims had been a difficult and challenging task, but one in which the project team had mostly succeeded. Yet the project had never been just about the team's own success: the project, and in particular the algorithmic system, needed to demonstrate its success (and even become a marketable good, see Chapter 6).

From the project proposal onwards, a commitment had always been present to put on three types of demonstration for three distinct kinds of audience. As the person responsible for ethics in the project, I would run a series of demonstrations for ethical experts, policy makers (mostly in the field of data protection) and academics who would be called upon to hold to account the ethical proposals made by the project. End-users from the train station and airport would also be given demonstrations of the technology as an opportunity to assess what they considered to be the potential strengths and weaknesses of the system. Finally, the project funders would be given a demonstration of the technology 'live' in the airport at the end of the project, as an explicit opportunity to assess the merits, achievements, failures and future research that might emanate from the project. We will take each of these forms of demonstration in turn and look at the ways in which our algorithms now engage with the everyday and the questions of integrity these engagements provoke.

#### *Demonstrating Ethics*

I invited a group of ethical experts (including academics, data protection officers, politicians and civil liberty organisations) to take part in a demonstration of the technology and also ran sponsored sessions at three conferences where academics could be invited along to demonstrations. The nature of these demonstrations at the time seemed partial (Strathern 2004), and in some ways deferred and delegated (Rappert 2001) the responsibility for ethical questions from me to the demonstration audiences. The demonstrations were partial in the sense that I could not use live footage, as these events did not take place in the end-user sites, and could only use footage of project participants acting out suspicious behaviour, due to data protection concerns that would arise if footage were used of non-project participants (e.g. airport passengers) who had not consented to take part in the demonstrations. Using recorded footage at this point seemed more like a compromise than an issue of integrity; audiences could be played footage of the User Interface and our algorithms selecting out human-shaped objects and action states (such as abandoned luggage), and even footage of the Route Reconstruction system replaying those objects deemed responsible for the events. Audience members were invited to discuss the ethical advantages and disadvantages they perceived in the footage. If it raised questions of integrity to any extent, it was perhaps in the use of recorded footage. But audiences were made aware of the recorded nature of the footage and the project participants' roles as actors. In place of a display of virtuosity (Collins 1988) or an attempt to manage revelation and concealment (Coopmans 2010), I (somewhat naively, it turned out) aimed to put on demonstrations as moments where audiences could raise questions of the technology, free from a dedicated move by any wily demonstrator to manage their experience of seeing.

Along with recorded footage, the audience were shown recordings of system responses; videos incorporated the technicalities of the Event Detection component of the system architecture, its selection procedures and provision of alerts. I took audiences through the ways in which the system put bounding boxes around relevant human-shaped and other objects deemed responsible for an action, and showed a few seconds of footage leading up to and following an alert. At this moment, I thought I was giving audiences a genuine recording of the system at work for them to discuss. However, it later transpired that the recorded footage and system response, and my attestation that these were more or less realistic representations of system capabilities, each spoke of an integrity belied by later demonstrations.

#### *End-User Demonstrations*

The limitations of these initial demonstrations became clear during a second form of demonstration, to surveillance operatives in the airport. Several members of the project team had assembled in an office in the airport in order to give operatives an opportunity to see the more developed version of the technology in action. Unlike initial discussions around the system architecture or initial experimentation with grasping the human-shaped object (see Chapter 2), our algorithms were now expected to deliver a full range of competences in real time and real space.18 These demonstrations also provided an opportunity for operatives to raise issues regarding the system's latest design (the User Interface, for example, had been changed somewhat), its strengths and limitations, and to ask any questions. This was to be the first 'live' demonstration of the technology using a live feed from the airport's surveillance system. Although Simakova (2010) talks of the careful preparations necessary for launching a new technology into the world, and various scholars cite the importance of revelation and concealment to moments of demonstration (Smith 2009; Coopmans 2010; Collins 1988), this attempt at a 'demonstration' to end-users came to seem confident, bordering on reckless, in its apparent disregard of care and concealment. Furthermore, although there was little opportunity to select the audience for the test (it was made up from operatives who were available and their manager), there was also little done to position the audience, manage their experience of seeing, incorporate them into a compelling narrative or perform any temporal oscillation (between the technology now and how it might be in the future; Suchman 2011; Brown 2003; Brown and Michael 2003; Simakova 2010; Smith 2009). The users remained as unconfigured witnesses (Coopmans 2010; Woolgar 1991).

Prior to the demonstration to end-users, the limited preparatory work of the project team had focused on compiling a set of metrics to be used for comparing the new algorithmic system with the existing conventional video-based surveillance system. An idea shared among the computer scientists in the project was that end-users could raise questions regarding the technology during a demonstration, but also be given the metric results as indicative of its effectiveness in aiding detection of suspicious events. The algorithmic and the conventional surveillance system would operate within the same temporal and spatial location of the airport and the operatives would be offered the demonstrators' metric criteria as the basis for judging sameness (Pinch 1993). The metrics would show that the new technology, with its move to limit visibility and storage, was still at least as effective as the current system in detecting events, but with added ethics.

This demonstration was designed to work as follows. The operatives of the conventional surveillance system suggested that over a 6-hour period, approximately 6 suspicious items that might turn out to be lost or abandoned luggage would be flagged by the operatives and sent to security operatives on the ground for further scrutiny. On this basis, our abandoned luggage algorithm and its IF-THEN rules (see Introduction and Chapter 2) needed to perform at least to this level for the comparative measure to do its work and demonstrate that the future would be as effective as the present, but with added ethics. The system was set to run for 6 hours prior to the arrival in the office of the surveillance operatives, so they could be given the results of the comparative metric. I had also taken an interest in these comparative metrics. I wanted to know how the effectiveness of our algorithms could be made calculable, what kinds of devices this might involve, and how entities like false positives (seeing things that were not there) and false negatives (not seeing things that were there) might be constituted. I wanted to relay these results to the ethical experts who had taken part in the previous demonstrations, on the basis that a clear division between technical efficacy and ethical achievement was not possible (see Chapter 3 for more on ethics). Whether or not the system worked on these criteria would provide a further basis for ethical scrutiny.
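The IF-THEN form of the abandoned luggage rule can be glossed in code. The following is a minimal, hypothetical sketch, not the project's implementation: every name, threshold and distance here is an assumption introduced for illustration. It flags a luggage-shaped object as abandoned IF no human-shaped object is nearby THEN only once that separation has persisted for a set time.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str   # shape classification: 'human' or 'luggage' (hypothetical labels)
    x: float    # position in the scene (units are illustrative)
    y: float

def is_abandoned(luggage, humans, now, last_near, min_dist=3.0, max_sep=30.0):
    """Hypothetical IF-THEN rule: luggage counts as 'abandoned' IF no
    human-shaped object is within min_dist AND that separation has lasted
    longer than max_sep seconds. last_near is the time a human-shaped
    object was last close by; the updated value is returned alongside
    the verdict so the caller can carry it to the next frame."""
    near = any(((h.x - luggage.x) ** 2 + (h.y - luggage.y) ** 2) ** 0.5 < min_dist
               for h in humans)
    if near:
        return False, now                  # a human is nearby: reset the clock
    return (now - last_near) > max_sep, last_near

# A bag left alone for 90 seconds triggers the rule; one with a person
# standing next to it does not.
bag = TrackedObject('luggage', 0.0, 0.0)
alone, _ = is_abandoned(bag, [TrackedObject('human', 50.0, 50.0)],
                        now=100.0, last_near=10.0)
attended, _ = is_abandoned(bag, [TrackedObject('human', 1.0, 1.0)],
                           now=100.0, last_near=10.0)
```

In a live system such a rule would run per frame over all tracked objects; as the 2654 alerts described below suggest, the hard part is less the rule itself than the upstream classification of what counts as a human-shaped or luggage-shaped object in the first place.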

In the 6 hours that the system ran, during which the conventional surveillance system would have been expected to detect 6 items of potentially lost or abandoned luggage, the algorithmic system detected 2654 potentially suspicious items. This result went so far off the scale of predicted events that the accuracy of the system could not even be measured. That is, there were simply too many alerts for anyone to go through and check the number of false positives. The working assumption of the computer scientists was that there were likely to be around 2648 incorrect classifications of human-shaped and luggage-shaped objects that had for a time stayed together and then separated. In later checking of a random sample of alerts, it turned out the system was detecting as abandoned luggage such things as reflective surfaces, sections of wall, a couple embracing and a person studying a departure board. Some of these were not fixed attributes of the airport and so did not feature in the digital maps that were used for background subtraction. However, object parameterisation should have been able to calculate that these were not luggage-shaped objects, and the flooring and walls should have been considered fixed attributes.
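The background subtraction invoked here can be illustrated with a toy sketch. This is an entirely hypothetical simplification, not the project's code: a stored 'background map' of the empty scene is compared with the current frame, and pixels that differ by more than a threshold become candidate foreground. On this logic, anything missing from the map, such as a reflective patch of floor or a section of wall, surfaces as spurious foreground, just as described above.

```python
def foreground_mask(frame, background, threshold=25):
    """Mark pixels that differ from the stored background map by more than
    `threshold` as foreground (1); everything else is background (0).
    Frames are plain 2-D lists of greyscale intensities (0-255)."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# A bright 'wall' pixel absent from the background map is classified as
# foreground just as a real object would be; a small lighting change
# below the threshold is correctly ignored.
background = [[0, 0, 0],
              [0, 0, 0]]
frame      = [[0, 200, 0],   # surface not present in the map
              [0, 0,  20]]   # minor lighting variation, under threshold
mask = foreground_mask(frame, background)
```

Object parameterisation would then measure each connected foreground region (its size, shape and so on) to decide whether it is human-shaped or luggage-shaped, which is where the misclassifications reported here would have had to be caught.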

However, in the immediate situation of the demonstration, there was not even time for this random sampling and its hastily developed explanations—these all came later. The airport surveillance operatives turned up just as the 2654 results were gathered together, and the project team had to meekly hand these results to the operatives' manager as evidence of system (in)efficacy.

The results of these tests also highlighted the limitations of my initial ethical demonstrations (described previously). The 'recorded' footage of the system in operation that I had (apparently) simply replayed to audiences began to seem distinctly at odds with the results from the live testing. What was the nature of the videos that I had been showing in these demonstrations? On further discussion with the computer scientists in the project, it turned out that system accuracy could be managed to the extent that the parameters of the footage feeding into the system could be controlled. For example, the computer scientists had worked out that a frame rate of 15 frames per second was ideal for providing enough detail without overloading the system with irrelevant footage. This frame rate enabled the system to work elegantly (see Chapter 2), using just enough processing power to produce results in real time. They also suggested that certain types of camera location (particularly those with a reasonably high camera angle, no shiny floors and consistent lighting) led to better results for the system. And the conditions of filming were also a pertinent matter; crowds of people, sunshine and too much luggage might confuse the system. As we can see in the following images (Figs. 5.1, 5.2, and 5.3), the system often became 'confounded' (to use a term from the computer scientists).

Collins (1988) and Coopmans (2010) suggest that central to demonstrations are questions of who is in a position to see what. However, the demonstrations considered here suggest that seeing is not straightforwardly a matter of what is revealed and what is concealed. In the development and demonstration of the algorithmic system, the straightforward division is made more complex between the seeing demonstrator and the audience whose vision is heavily managed. As a researcher and demonstrator, I was continually developing my vision of the algorithms and, in different ways, the end-users as audience were presented with stark (in)effcacy data to help shape how they might see the algorithms. The computer scientists also had a developing understanding of algorithmic vision (learning more about the precise ways that the system could not straightforwardly see different foor coverings or lighting **Fig. 5.1** A humanshaped object and luggage-shaped object incorrectly aggregated as luggage

conditions or manage different frame rates across different cameras). And some features of how our algorithms grasped everyday life were never resolved in the project. In the following image (Fig. 5.4), the algorithm has selected out a feature of the fxed attributes of the airport (a wall) as a luggage-shaped object, something that ought to be impossible using background subtraction as the wall ought to be part of the background map:

Further, the question of who is involved in seeing in these demonstrations needs to be extended to incorporate our algorithms too. In the ethical demonstrations, to reveal to the audience, but not the algorithm, that the data was recorded involved some integrity (those invited to hold the technology to account were at least apparently informed of the nature of the data being used, and if the recorded nature of the data was concealed from the algorithm, then the demonstration could be presented as sufficiently similar to using live data to maintain its integrity).

**Fig. 5.3** A human-shaped object's head that has been incorrectly classified as a human in its own right, measured by the system as small and therefore in the distance, and hence in a forbidden area set up for the demonstration

**Fig. 5.4** Wall as a luggage-shaped object

However, following the disappointing results of the user demonstration and further discussions with the computer scientists regarding the recorded data used in the ethical demonstrations, it transpired that the algorithms were not entirely in the dark about the nature of the footage. The computer scientists had a developing awareness that the algorithms could see a space with greater or lesser confidence according to camera angles, lights, the material floor covering, how busy a space happened to be and so on. Using recorded data that only included 'unproblematic' footage enabled the algorithms to have the best chance of seeing the space and to be recorded seeing that space successfully. To replay these recordings as the same as live data was to conceal the partially seeing algorithm (the algorithm that sees well in certain controlled conditions). Algorithmic vision (how the algorithm goes about seeing everyday life) and the constitution of the spaces in which the algorithms operate (including how the algorithms compose the nature of people and things) were entangled with questions of material, visual and moral integrity to which we will return below. However, first and most pressing for the project team was the question of what to do about demonstrating the ethical surveillance system to project funders given the disastrous efficacy results.

#### *Demonstration for Project Funders*

A meeting was called among project participants following the end-user demonstration. The dominant theme of the discussion was what to do about the rapidly approaching demonstration to project funders given the results of the end-user demonstrations. This discussion was made particularly tense when one of the computer scientists pointed out that in the original project description, a promise had been made to give the project funders a demonstration not only of the airport, but also of the other end-user location—the train station. Much of the discussion during the meeting concerned the technical challenges, now becoming apparent, of digitally mapping the fixed attributes of a space as complex as an airport in order for the algorithms to classify objects as human-shaped or not. And then there were the further complexities of mapping a train station too: both locations had camera angles not favoured by the algorithms (e.g. being too low), were subject to changing lighting conditions and frame rates, had multiple flooring materials and were busy with people and objects.

The following excerpts have been produced from fieldnotes taken during the meeting. The first option presented during the meeting was to use recorded data:

*Computer Scientist1*: it doesn't work well enough. We should use recorded data. [no response]

The silence that followed the computer scientist's suggestion was typical of what seemed to be multiple awkward pauses during the meeting. One reason for this might have been an ongoing difference among members of the project team as to how responsibility ought to be distributed for the disappointing end-user demonstration results. Another reason might have been a concern that to use recorded data was effectively to undermine the integrity of the final project demonstration. The computer scientist went on to make a further suggestion to the project coordinator:

*Computer Scientist1*: do you want to tell the truth? [no response]

The pause in the meeting following this second suggestion was slightly shorter than the first and was broken by the project coordinator, who began to set out a fairly detailed response to the situation, giving the impression that he had been gathering his thoughts for some time. In his view, a live test in the airport, using live video streams, was the only possibility for the demonstration to funders. For the train station, his view was different:

*Project Coordinator*: We should record an idealised version of the system, using recorded data. We can just tell the reviewers there's not enough time to switch [configurations from airport to train station]. What we are saying matches the [original project description]. We will say that a huge integration was required to get two installations.

In this excerpt the project coordinator suggests that for the train station, not only will recorded footage be used, but the demonstration will be 'idealised'. That is, a segment of recorded data will be used that fits computer scientists' expectations of what the algorithms are most likely to correctly see and correctly respond to (where 'correct' in both cases would be in line with the expectations of the project team). Idealising the demonstration is closer to a laboratory experiment than the initial system experimentation we saw in Chapter 2. It involved controlling conditions in such a way as to extend the clean and pure, controlled boundaries of the laboratory into the everyday life of the train station (drawing parallels with Muniesa and Callon's (2007) approach to the economist's laboratory) to manage a display of virtuosity (Collins 1988). This is the first way in which questions of integrity were opened: only footage from the best-positioned cameras, featuring people and things on one kind of floor surface, in one lighting condition, at times when the station was not busy, would be used. However, there was also a second question of integrity at stake here: the demonstration would also feature recorded system responses. This meant that the computer scientists could keep recording responses the system made—how our algorithms went about showing they had seen, grasped, classified and responded to everyday life correctly—until the computer scientists had a series of system responses that matched what the computer scientists expected the algorithms to see and show. Any 'errors' by the algorithms could be removed from the recording.

At this moment, several meeting participants looked as if they wanted to offer a response. However, the project coordinator cut off any further discussion:

*Project Coordinator*: I don't think there's any need to say anything on any subject that was not what I just said.

The immediate practical outcome of this meeting was to distribute tasks for the idealised, recorded train station demonstration (project members from StateTrack, the train operator, were to start recording video streams and provide computer scientists with more detail on their surveillance camera layouts; computer scientists were to start figuring out which cameras to use in the recordings; and so on). The distribution of tasks was seemingly swift and efficient, unlike the initial sections of the meeting, which were characterised by what appeared to be awkward pauses. For the train station demonstration, revelation and concealment (Coopmans 2010) would be carefully managed through the positioning of witnesses (Smith 2009). The ethical future to be brought into being would be staged with a naturalistic certainty—as if the images were just those that one would see on entering the train station, rather than a narrow selection of images from certain cameras, at certain angles, at certain times, of certain people and certain objects.

However, this focus on an idealised, recorded demonstration for the train station left the demonstration for the airport under-specified, aside from needing to be 'live'. Two follow-up meetings were held in the airport to ascertain how a 'live' demonstration of the technology could be given to the project funders. Allowing the algorithms to run on their own and pick out events as they occurred in the airport continued to provide disappointing results. The project coordinator maintained the need for a 'live' demonstration and in particular wanted to put on a live demonstration of the system detecting abandoned luggage, describing this as the 'king' of Event Detection (on the basis that it was perceived by the computer scientists and funders as the most complex event type to detect). In a second airport meeting, a month before the final demonstration, the project team and particularly the project coordinator became more concerned that the algorithms would not work 'live'. In response to these problems, the project team began to move towards idealising the 'live' demonstration as a means to increase the chance that the algorithms would successfully pick out abandoned luggage. To begin with, the airport operators and computer scientists discussed times when the airport would be quietest, on the basis that the number of people passing between a camera and an item of abandoned luggage might confuse the algorithm:

*Computer Scientist2*: Do we need to test the busy period, or quiet time like now?

*Project Coordinator*: Now I think is good.

*Computer Scientist1*: We need to find the best time to test… it cannot be too busy. We need to avoid the busy period because of crowding.

Once the ideal timing for a demonstration had been established (late morning or early afternoon, avoiding the early morning, lunchtime or early evening busy periods when multiple flights arrived and departed), other areas of activity that could be idealised were quickly drawn into discussion. It had become apparent in testing the technology that an item of abandoned luggage was identified by airport staff using the conventional surveillance system on average once an hour. To ensure that an item of luggage was 'abandoned' in the quiet period would require that someone known to the project (e.g. an airport employee in plain clothes) 'abandoned' an item of luggage. However, if the luggage was to be abandoned by someone known to the project, this opened up further opportunities for idealising the 'live' demonstration:

*Project Coordinator*: Is there a set of luggage which will prove better?

*Computer Scientist1*: In general some more colourful will be better.

The computer scientist explained that the background subtraction method for Event Detection might work more effectively if objects were in strong contrast to the background (note Fig. 5.3, where the system seems to have 'lost' the body of the human-shaped object as it does not contrast with the background and focused on the head as a human-shaped object in its own right). The system could not detect colour as such (it did not recognise yellow, green, brown, etc.), but the computer scientist reasoned that a very colourful bag would stand in contrast to any airport wall, even in shadow, and so might be more straightforward for the algorithms to classify:

*Computer Scientist2*: We could wrap it [the luggage] in the orange [high visibility vest].

*Project Coordinator*: Not for the final review, that will be suspicious.

*Computer Scientist2*: We could use that [pointing at the yellow recycle bin].

*Computer Scientist1*: That is the right size I think, from a distance it will look practically like luggage. We will check detection accuracy with colour … of the luggage. Maybe black is worst, or worse than others, we would like to check with a different colour. We should test the hypothesis.

*Computer Scientist2*: What if we wrap the luggage in this [yellow printed paper].

*Computer Scientist1*: I think yes.

*Computer Scientist2*: Would you like to experiment with both bags?

*Computer Scientist1*: Yes, we can check the hypothesis.

For the next run through of the test, one of the project team members' luggage was wrapped in paper to test the hypothesis that this would increase the likelihood of the object being detected by the algorithm (Fig. 5.5).
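The contrast hypothesis follows from how background subtraction works, and can be sketched in a few lines. This is an illustrative toy (single-channel 3x3 'frames' as nested lists, an arbitrary threshold), not the project's code:

```python
# Toy background subtraction: compare each new frame against a stored
# background map; pixels differing by more than a threshold form the
# foreground mask from which object-shaped regions would be extracted.
# The threshold and the tiny 3x3 'frames' are illustrative assumptions.

def subtract_background(background, frame, threshold=30):
    """Return a binary foreground mask (1 = candidate foreground pixel)."""
    return [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(f_row, b_row)]
        for f_row, b_row in zip(frame, background)
    ]

background = [[100, 100, 100]] * 3   # uniform wall/floor intensity
dark_bag   = [[110, 110, 100]] * 3   # low contrast with the background
bright_bag = [[200, 200, 100]] * 3   # high contrast with the background

weak   = subtract_background(background, dark_bag)    # bag barely registers
strong = subtract_background(background, bright_bag)  # bag clearly detected
```

A low-contrast bag sits within the threshold and vanishes into the background map, much as the body in Fig. 5.3 did; a high-contrast bag does not, hence the preference for colourful luggage.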

The hypothesis proved to be incorrect as the results for both items of luggage were broadly similar and continued to be disappointing. However, it seemed that the algorithms could always successfully accomplish background subtraction, classify objects as human-shaped and luggage-shaped and create an alert based on their action state as separate for a certain time and over a certain distance in one, very tightly delineated location in the airport. Here the IF-THEN rules of the algorithm seemed to work. The location provided a further basis to idealise the 'live' demonstration, except that the person 'abandoning' the luggage had to be very precise. In initial tests the computer scientists and the person dropping the luggage had to remain on their phones, precisely coordinating and adjusting where the luggage should be positioned. It seemed likely that a lengthy phone conversation in the middle of a demonstration and continual adjustment of the position of luggage would be noticed by project funders. The project team discussed alternatives to telephone directions:

*Project Coordinator*: We should make a list of exact points where it works perfectly, I can go with a marker and mark them.

*Computer Scientist1*: Like Xs, X marks the spot.


**Fig. 5.5** Luggage is idealised

After two days of rehearsal, the project coordinator was satisfied that the airport employee was leaving the luggage in the precisely defined location on a consistent basis, that the luggage selected was appropriate, that it was being left in a natural way (its position was not continually adjusted following telephone instructions) and that the algorithm was successfully classifying the luggage-shaped object and issuing an alert that funders would be able to see in the demonstration.
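The IF-THEN rule that these rehearsals were designed to trigger can be sketched as follows. The structure (a separation distance held for a duration) is as the chapter describes; the specific thresholds and names are illustrative assumptions, not the project's parameters:

```python
# Hedged sketch of an abandoned-luggage rule of the IF-THEN kind described
# above: IF a luggage-shaped object stays further than min_distance from
# every human-shaped object for at least min_seconds, THEN raise an alert.
# The threshold values here are invented for illustration.
import math

def is_abandoned(luggage_track, human_tracks, min_distance=5.0,
                 min_seconds=30):
    """luggage_track: per-second (x, y) positions of one luggage-shaped
    object; human_tracks[t]: positions of all human-shaped objects at t."""
    separated_for = 0
    for t, bag in enumerate(luggage_track):
        near = any(math.dist(bag, h) <= min_distance
                   for h in human_tracks[t])
        separated_for = 0 if near else separated_for + 1
        if separated_for >= min_seconds:
            return True   # THEN: issue an abandoned-luggage alert
    return False
```

Seen this way, the rehearsals amount to arranging the world so that one pass through this loop reliably reaches the THEN branch: a precisely marked drop point keeps the distances stable, and a quiet period keeps passers-by from resetting the separation clock.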

At this moment it appeared that the demonstration would be 'live' and 'idealised', but what of its integrity? I was still present to report on the ethics of the technology under development and the project itself. In the final preparation meeting prior to the demonstration for research funders, I suggested that a common motif of contemporary ethics was accountability and transparency (Neyland 2007; Neyland and Simakova 2009; also see Chapter 3) and that this sat awkwardly with the revelation, concealment and positioning of witnesses being proposed. On the whole, the project team supported the idea of making the demonstration more accountable and transparent—this was, after all, a research project. The project team collectively decided that the demonstration would go ahead, but the research funders would be told of the actor's status as an employee of the airport, that the abandonment itself was staged, and that instructions would be given to the actor in plain sight of the funders. Revelation and concealment were re-balanced and perhaps a degree of integrity was accomplished.

### Integrity, Everyday Life and the Algorithm

In this chapter, the complexity of the everyday life of our algorithms appeared to escalate. Moving from initial experimentation, in which the aim was to grasp the human-shaped and other shaped objects, towards testing and demonstrations in which the everyday life of the airport and train station had to be accounted for, proved challenging. Building on the partial failures of the deleting machine in Chapter 4 that pointed towards emerging problems with the system, here demonstrations for end-users of the full system highlighted significant problems. But these end-user demonstrations were only one of three types of demonstration (to generate ethical discussion, for end-user operatives and for project funders).

The complexities of these demonstrations can be analysed through the three themes we initially marked out in the STS literature on future orientations of technology. Each of the forms of demonstration intersects these themes in distinct ways. For example, the demonstrations for ethical audiences were initially conceived as free from many of the concerns of revelation and concealment, temporal oscillation and carefully scripted witnessing. I had (naively) imagined these demonstrations were occasions in which the technology would be demonstrated in an open manner, inspiring free discussion of its potential ethical implications. Yet the demonstration to end-users and the prior attempt to collect efficacy data to render the algorithmic system comparable with the conventional surveillance system (but with added ethics) revealed the extent of concealment, temporal oscillation and carefully scripted witnessing that had been required to put together the videos of the system for the ethical demonstrations. I could now see these videos as demonstrably accounting for an algorithmic technology with capabilities far beyond those displayed to end-users. We could characterise the ethical demonstration as a kind of idealised display of virtuosity (Collins 1988), but one in which no project member had confidence, following the search for efficacy data for end-users.

Subsequent discussions of the form and content of the demonstrations for project funders suggest that a compromise on integrity was required. The project coordinator looked to carefully manage revelation and concealment (for the train station, only using recorded footage, within conditions that algorithms could see, only using recorded system responses and only using those responses when the system had responded correctly; or in the airport, controlling the type of luggage, its location, its careful 'abandonment'), temporal oscillation (using the footage to conjure an ethical surveillance future to be made available now) and the elaboration of a world into which witnesses could be scripted (with the computer scientists, project manager, algorithms and myself initially in a different position from which to see the world being offered to project funders).

Yet discussion of demonstrations and their integrity should not lead us to conclude that this is simply and only a matter of deception. Attending to the distinct features of integrity through notions of morality, materiality and vision can help us to explore what kind of everyday life our algorithms were now entering into. Firstly, our algorithms have been consistently oriented towards three ethical aims (to see less and address privacy concerns, store less and address surveillance concerns, and only use existing algorithms as a means to address concerns regarding the expansion of algorithmic surveillance). Articulating the aims in ethical demonstrations constituted the grounds for a form of material-moral integrity—that the likelihood of a specific materially mediated future emerging through the algorithmic system could be adjudged through the demonstrations. The demonstrations thus involved bringing a material-moral world into being by clearly indexing the certainty and achievability of that world, creating a relation of undoubted correspondence between the everyday life of the demonstration and the future everyday that it pointed towards. Ethical experts were drawn into this process of bringing the world into being so that they might attest to the strength, veracity and reliability of the demonstrated world to which they had been witness. In demonstrations to funders, the latter were also inscribed into the material-moral world being indexed so that they might attest to the project funding being money well spent.
The pressure towards moral-material integrity is evidenced through: the project's own ethical claims positioning the tasks of the project as achieving a recognisably moral improvement to the world; the project participants' discussion of the demonstrations which appears to be an attempt to hold onto some moral integrity; recognition by project members of the impending demonstration to project funders and attempts to understand and pre-empt funders' concerns and questions by designing a suitable demonstration with apparent integrity.

For this material and moral integrity to hold together, the demonstration must operate in a similar manner to a fake artwork. Fake artworks must be able to convincingly index and thus constitute a context (e.g. a history of sales, appearances in auction catalogues). And the work of indexing must appear effortless, as if the artwork and its context have always been what they are; that those called upon to witness the indexing can be confident that if they were to go to the context (an auction catalogue), it would definitively point back to the artwork. Our algorithms must similarly index or point to (and thus constitute) a context (the everyday life of a train station and airport, of human-shaped objects and abandoned luggage) in a manner that is sufficiently convincing and seemingly effortless that if those called upon to witness the demonstration—such as project funders and ethical experts—went to the context (the train station or airport), they would be pointed back towards the images displayed through the technology (the footage selected by the algorithm showing events). The alerts shown need to be convincingly of the everyday life of the train station or airport (rather than just a few carefully selected cameras) and any and all events that happen to be occurring (rather than just a few select events, from certain angles, in certain lighting conditions, with carefully resourced and placed luggage). This is required for the system to be able to hold together its material and moral integrity and convince the witnesses they don't need to go to the train station or airport and assess the extent to which the footage they have been shown is a complete and natural representation of those spaces.
In other words, the technology must be able to show that it can alert us to (index) features of everyday life out there (context) and everyday life out there (context) must be prepared in such a way that it convincingly acts as a material whole with integrity from which alerts (index) have been drawn. The relation of undoubted correspondence must operate thus.

The moral-material integrity holds together for as long as movement from index to context and back is not questioned and the ethical premise of the technology is maintained; if there is a failure in the index-context relation—if it becomes a relation of doubtful correspondence—this would not only question the ethical premise of the project, but also the broader motives of project members in putting on the demonstration. The move—albeit late in the project—to make the demonstration at least partially transparent and accountable reflects this pre-empting of possible concerns that research funders might have held. Idealising the messiness of multiple floor coverings, lighting conditions, ill-disciplined passengers and luggage was relatively easily managed in the train station demonstration as it was displayed through recorded footage. However, idealising the 'live' airport demonstration and maintaining a natural and effortless relation of undoubted correspondence between index and context was much more challenging. Rehearsals, tests (of luggage and those doing abandonment) and off-screen control of the space (e.g. by marking the space where luggage must be dropped) were each likely to compromise the material-moral integrity of the demonstrations. Revealing their idealised features was perhaps unavoidable.

Secondly, the work of our algorithms provides us with an opportunity to review the complexities of morality and vision in demonstrations in new ways. Previously, Smith (2009) has suggested a complex relationship between originals, fabrications and partial fabrications in viewing the staged drama of a demonstration and Coopmans (2010) has argued that revelation and concealment are central to the practices of seeing in demonstrations. These are important contributions, and through our algorithms we can start to note that what the technology sees (e.g. the algorithms turn out to be able to see certain types of non-reflective flooring better than others), the distribution of vision (who and what sees) and the organisation of vision (who and what is in a position to produce an account of who and what) are important issues in the integrity of demonstrations. The train station demonstration can have more or less integrity according to this distribution and organisation of vision. If recorded footage is used but the algorithms do not know what it is they will see, this is noted by project participants as having more integrity than if recorded decision-making by the algorithms is also used. In the event, both types of recording were used. Discussions in project meetings around the demonstration for project funders led to similar questions. The algorithms need to see correctly (in classifying luggage as luggage-shaped objects) and to be seen correctly seeing (in producing system results) by, for example, project funders and ethical experts, in order to accomplish the visual-moral integrity to which the project has made claim: that the algorithms can grasp everyday life.

### Conclusion

In this chapter, the focus on demonstrating our algorithms' ability to grasp everyday life, compose accounts of everyday life and become the everyday of the airport and train station has drawn attention to notions of integrity. Given the project's ethical aims, work to bring a world into being through demonstration can be considered as concerted activities for bringing about a morally approved or better world. The moral terms of demonstrations can thus go towards establishing a basis from which to judge their integrity. Close scrutiny of demonstration work can then open up for analysis two ways of questioning the integrity of the moral world on show. Through material integrity, questions can be asked of the properties of demonstrations, what they seem to be and how they indexically provide for a means to constitute the moral order to which the demonstration attests. Through visual integrity, questions can be posed of who and what is seeing, the management of seeing, what it means to see correctly, and be seen correctly. Material and visual integrity is managed in such a way as to allow for the demonstrations to produce a relation of undoubted correspondence between index and context, establishing the integrity of the material and visual features of the technology: that it sees and has been seen correctly, and that the acts of seeing and those doing the seeing can be noted as having sufficient moral integrity for those acts of seeing to suffice.

The problems our algorithms have with grasping and composing an account of the everyday life of the airport and train station require that this integrity is compromised. Just taking Fig. 5.3 as an example, the human-shaped object composed by the algorithm does not match the human in the real time and real space of the airport (the system has placed a bounding box only around the human's head). Algorithmic time and space has produced a mismatch with airport time and space; the everyday life of the algorithm and the airport are at odds and the relation of undoubted correspondence between algorithmic index and airport context does not have integrity; the algorithm's composition of, rather than grasping of, the human is laid bare. A compromise is required to overcome this mismatch. Years of work to grasp the human-shaped object (Chapter 2), to make that grasping accountable (Chapter 3) and to demarcate relevance from irrelevance (Chapter 4) are all now at stake. In the next chapter, we will see that our algorithms' problems in seeing, and the need to adopt these compromises, pose questions for composing and managing the market value of the technology.





# Market Value and the Everyday Life of the Algorithm

**Abstract** The final chapter explores how a market can be built for an algorithmic system. It draws together studies of algorithms with the growing literature in science and technology studies (STS) on markets and the composition of financial value. It uses performativity to explore market making for algorithms. To accomplish market work and build a value for the algorithm, the chapter suggests, the project coordinators had to build a market of willing customers who were then constituted as a means to attract others to (potentially) invest in the system. This final chapter will suggest that market work is an important facet of the everyday life of an algorithm, without which algorithmic systems such as the one featured in this book would not endure. The chapter concludes with an analysis of the distinct and only occasionally integrated everyday lives of the algorithm.

**Keywords** Market making · Market share · Investment · Value · Performativity

## Opening

In Chapter 5, the ability of our algorithms to grasp and compose everyday life in the train station and airport came under significant scrutiny. Problems in classifying objects and their action states, issuing alerts and demarcating relevant from irrelevant footage were major concerns for the project participants. This built on the problems experienced in Chapter 4 with the deleting machine that seemed to always leave behind orphan frames. Taken together, this suggests that our algorithms might struggle to become the everyday of the airport and train station, at least in their current form. The system architecture, the individual components, the relevancy detection algorithms, the IF-THEN rules might all need more work. And for the computer scientists from University 1 and 2, this was no more or less than they expected: their work in this project built on a decade of research carried out by themselves and colleagues that would extend beyond the fixed time frame of this project into future efforts. Our algorithms might live on in modified form in whatever the computer scientists or other colleagues chose to do next.

The project coordinators faced a different question. For the coordinators of the project—a European-based consulting firm—the possibilities of developing an ethical, algorithmic surveillance system to take to the market had provided a compelling reason for their involvement in the project. Deletion, relevancy detection and algorithmic experimentation each had a partial orientation for the coordinators towards a future market. Building a value for the technology following trouble with relevancy detection, object classification, object tracking, background subtraction, the issuing of alerts and the deletion system appeared challenging. The coordinators instead looked to switch the basis on which the future of the technology was settled. Recognising that the system's results in demonstrations to end-users (see Chapter 5) and the deletion system's audit log (see Chapter 4) would generate a continuing output of demonstrative partial failure, the coordinators instead sought to build an alternative basis for relations with the world beyond the technology. This set of relations would seek to map out a new market value for the technology. In place of technical efficacy as a basis for selling the system, willing customers were constituted as a means to attract others to (potentially) invest in the system. In this chapter, I will suggest that building a world of (potential) customers to attract investors required a broad range of participants, with market trends, sizes and values separated out and made subject to calculation. To do market work and build an investment value required this careful plaiting of relations. I will suggest that the efforts required to shift the focus from technical efficacy to investment can be considered through ideas of performativity.

The chapter will begin with a brief digression through recent writing on performativity, before looking at the coordinators' work to draw investors into new relations with the algorithmic system. I will suggest that these relations operated in a similar manner to the object classification of our algorithms: investors, territories, future sales and market size had to be separated out and qualified, calculated and pacified in order that these new relations of investment might be developed. The chapter will end with a discussion of where we have reached in the everyday life of our algorithms.

### Performativity

Performativity has played an important part in the recent science and technology studies (STS) turn towards markets and marketing (see, for example, MacKenzie et al. 2007; MacKenzie 2008). The argument draws on the work of Austin (1962) and his notion of a performative utterance or speech act. Cochoy (1998) suggests a performative utterance can be understood as a statement 'that says and does what it says simultaneously' (p. 218). MacKenzie suggests a distinction can be made between utterances that do something and those that report on an already existing state of affairs (2008: 16). The most frequently quoted example, drawing on the work of Austin (1962), is the utterance 'I declare this meeting open'. Such an utterance is said to describe and bring into being the state that it describes—it is a speech act.

Developing this further, Cochoy (1998) suggests: 'a performative science is a science that simultaneously describes and constructs its subject matter. In this respect, the 'performation' of the economy by marketing directly refers to the double aspect of marketing action: conceptualising and enacting the economy at the same time' (p. 218). From this, we could understand that marketing brings the matter it describes into being. For other STS scholars, the focus is attuned to markets rather than marketing. For example, Callon suggests: 'economics in the broadest sense of the term performs, shapes and formats the economy' (1998: 2). Araujo thus suggests that performativity involves market and marketing type statements making themselves true by bringing into being the subject of the statement (2007: 218).

In relation to financial markets, MacKenzie looks at the ways in which the work of economists brings markets into being. MacKenzie (2003) suggests that traders use market models to inform their trades, creating market outcomes that match the models. Furthermore, economists' market equations embody a world of relations, prices and outcomes that the use of an equation effectively constitutes. The work of economists can be understood in a similar manner to a Kuhnian problem–solution exemplar: the complexity of the world can be rendered more or less coherent through models and equations which appear to work (i.e. to bring a solution to a problem) and can thus be employed again in other similar situations. The models and equations become paradigmatic couplings of problems and solutions for others to use. As a result, the risks faced by market actors in an otherwise complex, messy and uncertain world become reconceptualised as more or less manageable.

However, MacKenzie suggests that performativity is not a uniform phenomenon; instead he presents three approaches to performativity. First, there is 'generic' performativity in which: 'an aspect of economics (a theory, model, concept, procedure, data set, etc.) is used by participants in economic processes, regulations, etc' (2008: 17). Second, there is 'effective' performativity which involves: 'the practical use of an aspect of economics that has an effect on economic processes' (2008: 17). Third, drawing on the work of Barry Barnes, there is 'Barnes-ian' performativity in which: 'Practical use of an aspect of economics makes economic processes more like their depiction by economists' (2008: 17), and actions change in order to 'better correspond to the model' proposed by economists (2008: 19). We can see these approaches to performativity as moving from weakly formulated to more thorough forms of performativity. However, MacKenzie is clear that such models of performativity do not only operate in one direction. MacKenzie also introduces 'counter performativity' whereby: 'practical use of an aspect of economics makes economic processes less like their depiction by economists' (2008: 17).

Although this provides a provocative set of ideas for thinking through how market value for the algorithmic system might be built, performativity has been critiqued for buying too readily into, or merely confirming, the terms of market participants (Riles 2010; Dorn 2012; Bryan et al. 2012; Fourcade 2007; with a response from MacKenzie and Pardo-Guerra 2013). For Lee and LiPuma: 'The analytical problem is how to extend what has been a speech act-based notion of performativity to other discursively mediated practices, including ritual, economic practices, and even reading' (2002: 193). To switch attention to economic processes requires an expansion of the remit of performativity, a rethinking of the centrality of communication (such as Austin's utterances) and an incorporation of acting and doing. Incorporating this broader set of entities would move us towards an approach developed by Barad (2003), who suggests shifting performativity away from its starting point in studies of language use and questions of representation, towards action (a similar extension is proposed by Butler 1997, 2010).

Although Barad is not focused on markets and forms of economic exchange in her discussion of performativity, the questions she raises appear to resonate with concerns posed to the STS move to engage with markets, calculation and measurement; that performativity might problematically narrow the focus for analytical action. Callon's (2006, 2010) response to the critiques of performativity is that they continue (what he suggests is) Austin's (1962) mistake of assuming statements are in some way separable from their social, cultural or political context. Instead, Callon argues for a need to explore the worlds performed into market action. This will be the starting point for our exploration of the project coordinators' market work: just how do they perform a world of investment into being and what does this tell us of the everyday life of our algorithms?

### Building a Market Value for the Algorithms

In the absence of reliable evidence of technical efficacy, and given the apparent difficulties of putting on a convincing demonstration of the algorithms' ability to grasp or compose everyday life, the coordinators drew together a variety of entities to participate in the building of a putative world into which investors could be invited. Building such a world was a complex task, requiring calculative dexterity in order to render the emerging world convincing and legible in a document that could be sent to investors. It also required imagination to conjure the entities to be calculated and a compelling narrative into which they could be woven. Still, this would be nothing more than a putative world of potential investment. For it to be given performative effect required buy-in from the investors.

First, complex, dextrous and imaginative preparation work took place. The project coordinators segmented the world into geographical regions to be accorded more value (Central and South America, with strong predicted growth rates in video-based surveillance), even more value (Canada and Europe, with a growing interest in video-based surveillance and a burgeoning privacy-interested legislature and lobby) or less value (the USA, with apparently less interest in privacy and a saturated marketplace for smart video analytics). These segmented geographies were not left as vaguely valued territories, but transformed into specific and precise calculations of Compound Annual Growth Rates (CAGRs) derived from a combination of expensive industry reports the coordinators had purchased and online sources. In this way, the market for video-based surveillance analysis was calculated to have a CAGR of 15.6% between 2010 and 2016. This was then broken down into the more and less attractive geographical segments previously described.
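As an aside, the CAGR figures invoked in these calculations follow the standard compound annual growth rate definition; the symbols below are generic illustrations, not notation drawn from the coordinators' documents:

```latex
\mathrm{CAGR} = \left( \frac{V_{\mathrm{end}}}{V_{\mathrm{start}}} \right)^{1/n} - 1
```

where \(V_{\mathrm{start}}\) and \(V_{\mathrm{end}}\) are the market values at the beginning and end of the period and \(n\) is the number of years. A CAGR of 15.6% over 2010–2016 (\(n = 6\)) thus amounts to projecting that the market would grow by a factor of roughly \(1.156^{6} \approx 2.4\) over the period.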

This provided a tentative initial set of calculations on which to build an investment proposition: geographies were segmented and calculated. However, this dextrous and imaginative work to separate and calculate did not end here. Customers were treated in much the same way. Hence governments were identified as a particular type of customer, tied to more or less attractive geographies. The more attractive governments were calculated as accounting for 17.59% of the video surveillance market and as more likely to be compelled into buying a deletion technology in order to promote their own privacy-sensitive credentials. Transport firms were another customer type, segmented and calculated as accounting for a further 11% of the video surveillance market with a predicted CAGR of 13.39% between 2010 and 2016. Major transport-based terror attacks were invoked as a basis for this growth in investment, but transport organisations were also identified as another potentially privacy-concerned customer (this despite the transport companies involved in this project seeming to lose interest in privacy as the project developed). Specific technological developments were given the same treatment, with pixel numbers, high-definition cameras and algorithmic forms of data analysis all separated and calculated as growth areas. Finally, video-based surveillance processes such as data storage were also separated out and calculated as a growth area, but with a growing storage cost, the kind of cost that could be reduced through deletion. Although this separation and calculation work was directed towards building a putative world into which investors might become enwrapped, the coordinators also worked to distinguish entities as outside or external to this world of potential relations. Hence 44 competitors were identified, ranked according to size and spend, and their particular video-based, algorithmic data analysis systems were presented in terms of their inferior capabilities. This, despite our algorithms continually running into problems.

The work here by the coordinators was similar to that carried out by our algorithms. Separating out, calculating, preparing and qualifying some entities while disqualifying others (such as competitors), grasping features of the world out there and bringing them to the system, provided the basis for building a potential world of investment relations. Alongside segmented geographies, everything from governments to pixel numbers became entities of this putative market work. The entities segmented and qualified (and disqualified) were drawn together into the world of relations in a document entitled 'The Exploitation Report'. Here the qualified (and disqualified) entities made sense as providing a basis for investment. At the centre of this world of relations, however, sat our algorithms, the system architecture, its components and the deleting machine as an investment vehicle whose technical efficacy remained absent from accounts. Technical capabilities remained silent, rendering the Report's content accountably certain and ordered. The preparatory calculations embedded in the Report and the censure of any uncertainty in terms of demonstrable proof of technical efficacy would now provide the basis for performatively accomplishing an effect: building a world of investors. Through convincing investors that the Report was compelling proof of the viability of investment and that the technological system qualified as a reasonable investment risk, the coordinators hoped also to build investors into the world of the algorithms.

Inclusions, exclusions and careful calculation provided the means for the coordinators to try to build a compelling narrative that would achieve this performative effect. Rather than relying on a single utterance (as in Austin's illustrative examples of performativity), accomplishing this effect relied on the Report's extended narrative as a means to provide a particular kind of evidence (not of technical efficacy, but of investment potential) on a particular scale (across industries and geographies). In place of uncertainty derived from 44 competitors came the assertion that none of the competitors could deliver as sophisticated a solution as that promised by the project. In place of a concern with governments cutting budgets in times of austerity came the assertion that governments must look to cut costs and therefore should look for the kind of cheap storage solutions that auto-deletion technologies could provide. In place of a concern that a new surveillance system might attract privacy-based criticism came the assertion that this system carried with it and provided a response to that privacy criticism. And in place of any concern from among project members that the technology didn't work came nothing; technological inadequacies were excluded from the Report and its audience. Building this compelling narrative (Simakova and Neyland 2008) was central to accomplishing the performative effect.

From the preceding analysis, we can see that our algorithms are not left to fend for themselves, abandoned as a result of their technical inefficacies. Neither are they exactly excused from any further role in the project. They are in the Exploitation Report, but their lack of efficacy is excluded. To accomplish the performative effect, they need to be present as an investable entity at the same moment as key features of their activity are absent. The orderly world of the investment proposition is as much dependent on these absences as on the presence of the algorithms. Understanding performativity is not then restricted to single speech acts or even the content of the Exploitation Report alone, but requires understanding the concerted efforts to segment, calculate and prepare a world of people, things, processes, resources and relationships that the investors can enter. Preparing the putative world for investors involved these presences and absences, but also the possibility of accumulating something further. This built on the segmentation, calculation and preparation work to narrate future returns on investments from building an ethical, algorithmic surveillance system. The system could be invested in and might go on to do the work that might be required of companies in the emerging and changing Data Protection and privacy landscape, where such matters as a right to be forgotten (see Chapter 4) have gained momentum. Complying with policy requirements and customer expectations of privacy, and delegating this compliance to our algorithms (or at least, future renditions of our algorithms), might become a marketable good and attain a value.

Following many weeks of labour by the project coordinators in producing 'The Exploitation Report', the preparation work of segmentation, calculation and the absenting of certain forms of data (on technical efficacy) was hidden. Making sense of the performativity through which an investment proposition is given effect requires an understanding of this preparatory work, but also cannot ignore the compelling narrative in which it is subsequently involved. Market value here achieves its potential through the segmentation of geographies, technologies, competitors and customers, the apportioning of a calculative value (or non-value) to these entities and evidence from third parties to support the values claimed. This work is only partly evident in the Report. The outcomes rather than the means of calculation, for example, are made prominent. However, the Report itself also needs consideration. The preparation work to segment, calculate and value entities had to be drawn into a compelling narrative that supported the future development of the algorithmic system. Work was thus done to connect things we all know are happening now (such as government austerity measures and the need to cut budgets) with features of the technological future (such as deletion), to generate a compelling narrative for investment in the algorithmic technologies (in this instance, that austerity measures and cost-cutting could be achieved through deletion by cutting data storage costs). And other things that we know are taking place (such as the introduction of the EU General Data Protection Regulation) could be connected with a range of required activities (compliance with the legislation) that could be accomplished via our algorithms. Certainty in the narration of problems (that these problems exist and will be faced by these customers) and solutions (that this system will address these problems) might prove compelling.
At the same time, producing a compelling narrative also required that some numbers (technical efficacy) and forms of calculation (how the world of the Report was prepared) remained absent. This continual switching between temporalities—the world as we know it now and the investable future—and accounts—things to be made available and things to be absented—became the means to attempt to compel investors to join the world of relations being built into the algorithmic system; that its market value would arrive.

### The Everyday Life of the Algorithm

Where does this leave our algorithms? As the slightly embarrassing and incapable project partner to be excluded from financial calculations, a waste of time and money? And what does this tell us about the drama played out in current academic writing and in the media (see Introduction), in which algorithms are expected to take over our lives, run wild with our data or operate in ways that we cannot see? To address these questions, we need to step back and take a look at the everyday lives of our algorithms as they have developed throughout the chapters of this book. We need to see just where our algorithms have got to in life to make sense of their proposed future, their social, economic and technical prospects.

In the Introduction, we met the abandoned luggage algorithm and its IF-THEN rules. Little more than a set of step-by-step instructions that set out some conditions and consequences, these rules seemed far removed from the drama of artificial intelligence, big data and the opaque and inscrutable algorithm. Indeed, scrutinising these IF-THEN rules appeared to offer little prospect of a great step forward. They were not about to leap off the page and create great change in the world. In order to understand this algorithm and the drama in which it was expected to participate, we needed to get close to its everyday activity. We needed to know just how this algorithm participated in everyday life, grasped or even composed everyday life and participated in the production of effects. We needed to know something about its prospects of becoming the everyday. It seemed clear that the IF-THEN rules alone would have little consequence. We needed to know who and what the algorithms were working with. Rather than treat the non-human as an incidental figure (as much of the sociological writing on the everyday has tended to), the algorithm would be accorded a specific kind of status. As a first move, we needed to de-centre the human as we know it from the middle of the drama. We could not afford to assume that this was primarily a story to be told by people. We needed to give the algorithm and its technical partners, at least in principle, the same potential agential status as the humans, and then we needed to make sense of how they each participated in the composition of effects. We then needed to enter into the varied and only partially integrated everyday lives of the algorithm.
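To make the modesty of such rules concrete, an IF-THEN rule of the kind described in the Introduction can be sketched in a few lines. This is a hypothetical illustration only: the object classes, the threshold and the function name are my assumptions, not the project's actual code.

```python
# A minimal sketch of an abandoned-luggage IF-THEN rule.
# The class labels ("luggage-shaped", "human-shaped") and the
# 60-second threshold are illustrative assumptions.

ABANDONMENT_SECONDS = 60  # assumed threshold for "abandoned"

def check_abandoned_luggage(obj, nearby_objects, stationary_seconds):
    """Return True when a luggage-shaped object has remained
    stationary too long with no human-shaped object close by."""
    # IF the object is not luggage-shaped THEN no alert
    if obj["class"] != "luggage-shaped":
        return False
    # IF the luggage has not been still long enough THEN no alert
    if stationary_seconds < ABANDONMENT_SECONDS:
        return False
    # IF no human-shaped object remains nearby THEN raise an alert
    humans_nearby = any(o["class"] == "human-shaped" for o in nearby_objects)
    return not humans_nearby
```

On its own, the rule does nothing: it only acquires consequences through the cameras, classifiers, operators and system components it works with, which is precisely the point of following its everyday life.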

In Chapter 2, the human-shaped object and luggage-shaped object (among other objects) provided a focal point for our engagement with the algorithms' everyday lives. Computer scientists in the project sought an elegant solution that was concise (using only the minimum amount of processing power required) and could solve the problems posed by the project to the satisfaction of various audiences. Here we could get a first glimpse of how the algorithms might engage in the everyday. Was this grasping everyday life (as if its major constituents were there prior to the work of algorithms, just waiting to be collected and displayed) or composing everyday life (a more fundamental working up from scratch of the objects to be made)? As a surprise to me, an ethnographer with an inclination towards composition, it turned out to be both. The algorithms were in the business of composing the everyday, with models built from scratch of the parameters of what it meant to be human-shaped, classifying small segments of streams of digital video data into putative humans, and then offering those forward as a means to classify the action states of those objects. Even articulating the everyday on these terms seems like a new form of composition, at least in contrast to how we might go about our everyday lives. But these algorithms also needed to grasp the everyday. They were not free to compose without limits, as if there were no a priori world from which these objects could be mustered. The life of the airport, the people and objects in it, had to be given a life within the algorithmic system that could be traced back through the airport and train station. Actual distances in centimetres, speeds at which people walk, distances covered, the angle and zoom of cameras, among many other features of the everyday, had to be accorded a form that enabled them to be grasped.
And they had to be grasped in such a way by the algorithms that the journey could be made back in the other direction, from algorithm back to train station and airport. These were the demands that an elegant solution had to meet.

So our algorithms were beginning to be competent in grasping and composing everyday life. But their own lives were not without constraints. They were not just in the business of producing results, but of demonstratively proving that they had produced the right kind of results. These were outputs that accountably and demonstrably accomplished the project's three ethical aims: to see less, to store less and to do so without creating any new algorithms. Elegance alone was insufficient. To an ethnographer assessing their ethics, to an ethics board and later in ethical demonstrations, our algorithms had to continually and accountably prove their capabilities. The abandoned luggage, moving the wrong way and movement into a forbidden area algorithms had to work with other system components in Chapter 3 (the User Interface, the Route Reconstruction system, probabilistic trees, algorithmic children, parameterisation, the classification of objects and action states) to collectively demonstrate that everyday life could be improved by the emerging system. This was composition of everyday life, then, but one that was also morally improved. The world was not just grasped, but ethically enhanced. The accountable order that the algorithms could participate in, while in their experimental activities, had to intersect with a more formal sense of accountability. An opportunity had to be developed for future data subjects of algorithmic decision-making and their representatives to question the system. The algorithms also had to engage with the ethics board to begin to give effect to the ethically enhanced world. Unfortunately for our algorithms, these effects, and the confidence with which they were demonstrated, began to dissipate as the system moved beyond experimentation.

In Chapter 4, it became clear that the system's ethical aims might have a value beyond experimentation, in accomplishing compliance with new regulatory demands to delete data. Deletion might provide a means to accomplish a market value for our algorithms. Yet it was here that problems began to emerge. As preparations were made to use the algorithms to distinguish between relevant and irrelevant data and provide demonstrative proof that irrelevant data could be effectively and accountably deleted, project members started to disagree. Just what should constitute adequate deletion? Changing the route by which a user connects to data, overwriting, corrupting or expunging data from the system? As the project coordinators sought the most thorough means of deletion possible, as a prior step to developing a market for the system, the computer scientists struggled to meet their demands. A system log was developed to produce accountable reports, for humans, of the algorithms' ability to delete. But the system did no more than continually report its own failures: data was not deleted in its entirety, orphan frames were left behind, and the demarcation of relevant from irrelevant data came under scrutiny. The production of nothing (the deleted) required the production of something (an account of deletion), but the failure to successfully accomplish nothing (with deletion undermined by the stubborn presence of orphan frames) created a troubling something—a continually disruptive presence that questioned our algorithms' abilities to produce nothing. Much of everyday life—somewhere between 95 and 99%—it turns out, is irrelevant and can be deleted. By failing to grasp all this irrelevance, and instead leaving a trail of data and reports that attested to this failure, the prospects of our algorithms becoming the everyday of the airport and train station were diminished.

This was the start of some escalating troubles for the algorithms. As they continued their journey from experimentation, they had to enter into the ever greater wilds of everyday life. From experimentation in settings with matching flooring and lighting, project participants acting out the roles of normal humans, and cameras supplying data from the right angle, height and distance, at the right frame rate for our algorithms to see, our algorithms now had to grasp real space, in real time. Here people, things and events unfolded in a naturally occurring way, across different floorings and lighting conditions, at different frame rates, with humans who now acted in oddly normal ways. Children went this way and that way, adults stood still for too long, luggage did not behave as it ought and humans wore the wrong kinds of outfits that looked just like the airport floor. Grasping and composing this everyday was too challenging. Under test conditions, in place of 6 items of potentially abandoned luggage came 2654 items. The relevant and irrelevant intermingled in a disastrous display of technical inefficacy. What had seemed like reasonable demonstrations of the algorithms' capabilities to ethical audiences now had to be questioned. Questions of the material integrity of these demonstrations (and the extent to which a relation of undoubted correspondence could be maintained between the system put on show and the world to which it pointed) were only matched by questions of their visual integrity (of who and what was in a position to see who and what). These questions continued and even grew for a time as our algorithms moved towards their final demonstrations to research funders. The king of Event Detection—abandoned luggage—could only be demonstrated through a careful whittling away of confounding variables. The flooring, lighting, luggage type, positioning, behaviour of the luggage's human owner, frame rate of the camera and other human-shaped objects of the airport each had to be closely controlled. In place of the algorithm going out into the world grasping or composing real-time, real-space everyday life, a more modest and controlled everyday had to be brought to the system.

And so we find in our final chapter that the algorithms are somewhat quiet. Away from the drama of contemporary academic writing and popular media stories, the algorithms take up a meek position in an Exploitation Report. In place of any fanfare regarding their technical efficacy comes a carefully composed account, depending on imaginative and dextrous calculative work. Here, more and less valued geographical regions, customer types and inferior competitors stand in as proxies for our algorithms. The calculations, instead of talking about current technical efficacies, point towards a future potential of market value that could be achieved with investment. The performative accomplishment of the investment proposition negates the need for our algorithms' everyday life to be put on display. At the end, they are not entirely absent from our story, but from the Exploitation Report their grasp and composition of everyday life, their prospects of becoming the everyday of the airport and train station, are deleted. Goodbye algorithm.

### References


Riles, A. (2010). Collateral Expertise: Legal Knowledge in the Global Financial Markets. *Current Anthropology, 51*(6), 795–818.

Simakova, E., & Neyland, D. (2008). Marketing Mobile Futures: Assembling Constituencies and Narrating Compelling Stories for an Emerging Technology. *Marketing Theory, 8*(1), 91–116.

